2026-02-03 02:23:11.440647 | Job console starting
2026-02-03 02:23:11.451926 | Updating git repos
2026-02-03 02:23:11.538566 | Cloning repos into workspace
2026-02-03 02:23:11.761634 | Restoring repo states
2026-02-03 02:23:11.806091 | Merging changes
2026-02-03 02:23:11.806132 | Checking out repos
2026-02-03 02:23:12.052143 | Preparing playbooks
2026-02-03 02:23:12.721724 | Running Ansible setup
2026-02-03 02:23:17.113049 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-02-03 02:23:17.866496 |
2026-02-03 02:23:17.866652 | PLAY [Base pre]
2026-02-03 02:23:17.884219 |
2026-02-03 02:23:17.884385 | TASK [Setup log path fact]
2026-02-03 02:23:17.917064 | orchestrator | ok
2026-02-03 02:23:17.937194 |
2026-02-03 02:23:17.937420 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-03 02:23:17.985141 | orchestrator | ok
2026-02-03 02:23:18.001645 |
2026-02-03 02:23:18.001775 | TASK [emit-job-header : Print job information]
2026-02-03 02:23:18.042160 | # Job Information
2026-02-03 02:23:18.042373 | Ansible Version: 2.16.14
2026-02-03 02:23:18.042430 | Job: testbed-upgrade-stable-rc-ubuntu-24.04
2026-02-03 02:23:18.042467 | Pipeline: periodic-midnight
2026-02-03 02:23:18.042491 | Executor: 521e9411259a
2026-02-03 02:23:18.042513 | Triggered by: https://github.com/osism/testbed
2026-02-03 02:23:18.042535 | Event ID: 33d05aa35aa34cb39d03181114dbe772
2026-02-03 02:23:18.049857 |
2026-02-03 02:23:18.049970 | LOOP [emit-job-header : Print node information]
2026-02-03 02:23:18.183406 | orchestrator | ok:
2026-02-03 02:23:18.183797 | orchestrator | # Node Information
2026-02-03 02:23:18.183867 | orchestrator | Inventory Hostname: orchestrator
2026-02-03 02:23:18.183910 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-02-03 02:23:18.183950 | orchestrator | Username: zuul-testbed03
2026-02-03 02:23:18.183986 | orchestrator | Distro: Debian 12.13
2026-02-03 02:23:18.184027 | orchestrator | Provider: static-testbed
2026-02-03 02:23:18.184064 | orchestrator | Region:
2026-02-03 02:23:18.184102 | orchestrator | Label: testbed-orchestrator
2026-02-03 02:23:18.184136 | orchestrator | Product Name: OpenStack Nova
2026-02-03 02:23:18.184170 | orchestrator | Interface IP: 81.163.193.140
2026-02-03 02:23:18.215118 |
2026-02-03 02:23:18.215364 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-02-03 02:23:18.707816 | orchestrator -> localhost | changed
2026-02-03 02:23:18.726811 |
2026-02-03 02:23:18.727030 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-02-03 02:23:19.807868 | orchestrator -> localhost | changed
2026-02-03 02:23:19.831802 |
2026-02-03 02:23:19.831946 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-02-03 02:23:20.134007 | orchestrator -> localhost | ok
2026-02-03 02:23:20.151147 |
2026-02-03 02:23:20.151531 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-02-03 02:23:20.189294 | orchestrator | ok
2026-02-03 02:23:20.210224 | orchestrator | included: /var/lib/zuul/builds/ddf3637b028d45358890c8bfcc4ea9a8/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-02-03 02:23:20.218994 |
2026-02-03 02:23:20.219099 | TASK [add-build-sshkey : Create Temp SSH key]
2026-02-03 02:23:21.059853 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-02-03 02:23:21.060476 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/ddf3637b028d45358890c8bfcc4ea9a8/work/ddf3637b028d45358890c8bfcc4ea9a8_id_rsa
2026-02-03 02:23:21.060603 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/ddf3637b028d45358890c8bfcc4ea9a8/work/ddf3637b028d45358890c8bfcc4ea9a8_id_rsa.pub
2026-02-03 02:23:21.060687 | orchestrator -> localhost | The key fingerprint is:
2026-02-03 02:23:21.060763 | orchestrator -> localhost | SHA256:odU3af2p9B/6vQfWk1JUi9pjUuyUMMQj+OFHKDlv3r4 zuul-build-sshkey
2026-02-03 02:23:21.060830 | orchestrator -> localhost | The key's randomart image is:
2026-02-03 02:23:21.060918 | orchestrator -> localhost | +---[RSA 3072]----+
2026-02-03 02:23:21.060984 | orchestrator -> localhost | | o ++ o|
2026-02-03 02:23:21.061048 | orchestrator -> localhost | | = = ++oo..|
2026-02-03 02:23:21.061110 | orchestrator -> localhost | | O = **o. |
2026-02-03 02:23:21.061168 | orchestrator -> localhost | | o * +*. o.|
2026-02-03 02:23:21.061224 | orchestrator -> localhost | | . S oo *.oo|
2026-02-03 02:23:21.061295 | orchestrator -> localhost | | . .+.*o.|
2026-02-03 02:23:21.061381 | orchestrator -> localhost | | . o.+.|
2026-02-03 02:23:21.061438 | orchestrator -> localhost | | . . =|
2026-02-03 02:23:21.061498 | orchestrator -> localhost | | E...o=|
2026-02-03 02:23:21.061557 | orchestrator -> localhost | +----[SHA256]-----+
2026-02-03 02:23:21.061698 | orchestrator -> localhost | ok: Runtime: 0:00:00.300114
2026-02-03 02:23:21.079989 |
2026-02-03 02:23:21.080153 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-02-03 02:23:21.115472 | orchestrator | ok
2026-02-03 02:23:21.127960 | orchestrator | included: /var/lib/zuul/builds/ddf3637b028d45358890c8bfcc4ea9a8/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-02-03 02:23:21.137548 |
2026-02-03 02:23:21.137652 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-02-03 02:23:21.161547 | orchestrator | skipping: Conditional result was False
2026-02-03 02:23:21.170083 |
2026-02-03 02:23:21.170195 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-02-03 02:23:21.817834 | orchestrator | changed
2026-02-03 02:23:21.825293 |
2026-02-03 02:23:21.825458 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-02-03 02:23:22.157446 | orchestrator | ok
2026-02-03 02:23:22.166585 |
2026-02-03 02:23:22.166712 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-02-03 02:23:22.642063 | orchestrator | ok
2026-02-03 02:23:22.650246 |
2026-02-03 02:23:22.650386 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-02-03 02:23:23.095965 | orchestrator | ok
2026-02-03 02:23:23.102212 |
2026-02-03 02:23:23.102330 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-02-03 02:23:23.126688 | orchestrator | skipping: Conditional result was False
2026-02-03 02:23:23.133379 |
2026-02-03 02:23:23.133478 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-02-03 02:23:23.594528 | orchestrator -> localhost | changed
2026-02-03 02:23:23.621116 |
2026-02-03 02:23:23.621271 | TASK [add-build-sshkey : Add back temp key]
2026-02-03 02:23:23.993960 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/ddf3637b028d45358890c8bfcc4ea9a8/work/ddf3637b028d45358890c8bfcc4ea9a8_id_rsa (zuul-build-sshkey)
2026-02-03 02:23:23.994530 | orchestrator -> localhost | ok: Runtime: 0:00:00.020608
2026-02-03 02:23:24.009614 |
2026-02-03 02:23:24.009779 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-02-03 02:23:24.484959 | orchestrator | ok
2026-02-03 02:23:24.494549 |
2026-02-03 02:23:24.494689 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-02-03 02:23:24.529858 | orchestrator | skipping: Conditional result was False
2026-02-03 02:23:24.591319 |
2026-02-03 02:23:24.591453 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-02-03 02:23:25.046179 | orchestrator | ok
2026-02-03 02:23:25.062063 |
2026-02-03 02:23:25.062200 | TASK [validate-host : Define zuul_info_dir fact]
2026-02-03 02:23:25.106740 | orchestrator | ok
2026-02-03 02:23:25.118951 |
2026-02-03 02:23:25.119115 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-02-03 02:23:25.459543 | orchestrator -> localhost | ok
2026-02-03 02:23:25.474088 |
2026-02-03 02:23:25.474246 | TASK [validate-host : Collect information about the host]
2026-02-03 02:23:26.786683 | orchestrator | ok
2026-02-03 02:23:26.800446 |
2026-02-03 02:23:26.800556 | TASK [validate-host : Sanitize hostname]
2026-02-03 02:23:26.886629 | orchestrator | ok
2026-02-03 02:23:26.895923 |
2026-02-03 02:23:26.896134 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-02-03 02:23:27.494138 | orchestrator -> localhost | changed
2026-02-03 02:23:27.506466 |
2026-02-03 02:23:27.506604 | TASK [validate-host : Collect information about zuul worker]
2026-02-03 02:23:27.947026 | orchestrator | ok
2026-02-03 02:23:27.955788 |
2026-02-03 02:23:27.955928 | TASK [validate-host : Write out all zuul information for each host]
2026-02-03 02:23:28.519111 | orchestrator -> localhost | changed
2026-02-03 02:23:28.530444 |
2026-02-03 02:23:28.530555 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-02-03 02:23:28.808515 | orchestrator | ok
2026-02-03 02:23:28.817056 |
2026-02-03 02:23:28.817172 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-02-03 02:23:54.558474 | orchestrator | changed:
2026-02-03 02:23:54.558753 | orchestrator | .d..t...... src/
2026-02-03 02:23:54.558790 | orchestrator | .d..t...... src/github.com/
2026-02-03 02:23:54.558816 | orchestrator | .d..t...... src/github.com/osism/
2026-02-03 02:23:54.563787 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-02-03 02:23:54.563849 | orchestrator | RedHat.yml
2026-02-03 02:23:54.578380 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-02-03 02:23:54.578398 | orchestrator | RedHat.yml
2026-02-03 02:23:54.578451 | orchestrator | = 2.2.0"...
2026-02-03 02:24:06.550990 | orchestrator | - Finding latest version of hashicorp/null...
2026-02-03 02:24:06.570449 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-02-03 02:24:06.709463 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-02-03 02:24:07.151529 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-02-03 02:24:07.511121 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-02-03 02:24:08.441067 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-02-03 02:24:08.507058 | orchestrator | - Installing hashicorp/local v2.6.2...
2026-02-03 02:24:08.975649 | orchestrator | - Installed hashicorp/local v2.6.2 (signed, key ID 0C0AF313E5FD9F80)
2026-02-03 02:24:08.975691 | orchestrator |
2026-02-03 02:24:08.975696 | orchestrator | Providers are signed by their developers.
2026-02-03 02:24:08.975700 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-02-03 02:24:08.975703 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-02-03 02:24:08.975713 | orchestrator |
2026-02-03 02:24:08.975717 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-02-03 02:24:08.975721 | orchestrator | selections it made above. Include this file in your version control repository
2026-02-03 02:24:08.975738 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-02-03 02:24:08.975742 | orchestrator | you run "tofu init" in the future.
2026-02-03 02:24:08.975944 | orchestrator |
2026-02-03 02:24:08.975960 | orchestrator | OpenTofu has been successfully initialized!
2026-02-03 02:24:08.975966 | orchestrator |
2026-02-03 02:24:08.975973 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-02-03 02:24:08.975979 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-02-03 02:24:08.975991 | orchestrator | should now work.
2026-02-03 02:24:08.975995 | orchestrator |
2026-02-03 02:24:08.975998 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-02-03 02:24:08.976001 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-02-03 02:24:08.976005 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-02-03 02:24:09.143418 | orchestrator | Created and switched to workspace "ci"!
2026-02-03 02:24:09.143526 | orchestrator |
2026-02-03 02:24:09.143535 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-02-03 02:24:09.143539 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-02-03 02:24:09.143543 | orchestrator | for this configuration.
2026-02-03 02:24:09.251248 | orchestrator | ci.auto.tfvars
2026-02-03 02:24:09.256905 | orchestrator | default_custom.tf
2026-02-03 02:24:10.131291 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-02-03 02:24:10.672777 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-02-03 02:24:10.875216 | orchestrator |
2026-02-03 02:24:10.875275 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-02-03 02:24:10.875287 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-02-03 02:24:10.875294 | orchestrator |   + create
2026-02-03 02:24:10.875301 | orchestrator |  <= read (data resources)
2026-02-03 02:24:10.875308 | orchestrator |
2026-02-03 02:24:10.875314 | orchestrator | OpenTofu will perform the following actions:
2026-02-03 02:24:10.875327 | orchestrator |
2026-02-03 02:24:10.875334 | orchestrator |   # data.openstack_images_image_v2.image will be read during apply
2026-02-03 02:24:10.875340 | orchestrator |   # (config refers to values not yet known)
2026-02-03 02:24:10.875346 | orchestrator |  <= data "openstack_images_image_v2" "image" {
2026-02-03 02:24:10.875353 | orchestrator |       + checksum = (known after apply)
2026-02-03 02:24:10.875360 | orchestrator |       + created_at = (known after apply)
2026-02-03 02:24:10.875366 | orchestrator |       + file = (known after apply)
2026-02-03 02:24:10.875372 | orchestrator |       + id = (known after apply)
2026-02-03 02:24:10.875396 | orchestrator |       + metadata = (known after apply)
2026-02-03 02:24:10.875403 | orchestrator |       + min_disk_gb = (known after apply)
2026-02-03 02:24:10.875409 | orchestrator |       + min_ram_mb = (known after apply)
2026-02-03 02:24:10.875415 | orchestrator |       + most_recent = true
2026-02-03 02:24:10.875422 | orchestrator |       + name = (known after apply)
2026-02-03 02:24:10.875427 | orchestrator |       + protected = (known after apply)
2026-02-03 02:24:10.875433 | orchestrator |       + region = (known after apply)
2026-02-03 02:24:10.875441 | orchestrator |       + schema = (known after apply)
2026-02-03 02:24:10.875448 | orchestrator |       + size_bytes = (known after apply)
2026-02-03 02:24:10.875454 | orchestrator |       + tags = (known after apply)
2026-02-03 02:24:10.875460 | orchestrator |       + updated_at = (known after apply)
2026-02-03 02:24:10.875466 | orchestrator |     }
2026-02-03 02:24:10.875473 | orchestrator |
2026-02-03 02:24:10.875480 | orchestrator |   # data.openstack_images_image_v2.image_node will be read during apply
2026-02-03 02:24:10.875486 | orchestrator |   # (config refers to values not yet known)
2026-02-03 02:24:10.875526 | orchestrator |  <= data "openstack_images_image_v2" "image_node" {
2026-02-03 02:24:10.875535 | orchestrator |       + checksum = (known after apply)
2026-02-03 02:24:10.875541 | orchestrator |       + created_at = (known after apply)
2026-02-03 02:24:10.875547 | orchestrator |       + file = (known after apply)
2026-02-03 02:24:10.875553 | orchestrator |       + id = (known after apply)
2026-02-03 02:24:10.875559 | orchestrator |       + metadata = (known after apply)
2026-02-03 02:24:10.875565 | orchestrator |       + min_disk_gb = (known after apply)
2026-02-03 02:24:10.875571 | orchestrator |       + min_ram_mb = (known after apply)
2026-02-03 02:24:10.875577 | orchestrator |       + most_recent = true
2026-02-03 02:24:10.875583 | orchestrator |       + name = (known after apply)
2026-02-03 02:24:10.875589 | orchestrator |       + protected = (known after apply)
2026-02-03 02:24:10.875595 | orchestrator |       + region = (known after apply)
2026-02-03 02:24:10.875602 | orchestrator |       + schema = (known after apply)
2026-02-03 02:24:10.875608 | orchestrator |       + size_bytes = (known after apply)
2026-02-03 02:24:10.875615 | orchestrator |       + tags = (known after apply)
2026-02-03 02:24:10.875621 | orchestrator |       + updated_at = (known after apply)
2026-02-03 02:24:10.875628 | orchestrator |     }
2026-02-03 02:24:10.875639 | orchestrator |
2026-02-03 02:24:10.875645 | orchestrator |   # local_file.MANAGER_ADDRESS will be created
2026-02-03 02:24:10.875652 | orchestrator |   + resource "local_file" "MANAGER_ADDRESS" {
2026-02-03 02:24:10.875658 | orchestrator |       + content = (known after apply)
2026-02-03 02:24:10.875665 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-03 02:24:10.875671 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-03 02:24:10.875678 | orchestrator |       + content_md5 = (known after apply)
2026-02-03 02:24:10.875683 | orchestrator |       + content_sha1 = (known after apply)
2026-02-03 02:24:10.875690 | orchestrator |       + content_sha256 = (known after apply)
2026-02-03 02:24:10.875696 | orchestrator |       + content_sha512 = (known after apply)
2026-02-03 02:24:10.875702 | orchestrator |       + directory_permission = "0777"
2026-02-03 02:24:10.875708 | orchestrator |       + file_permission = "0644"
2026-02-03 02:24:10.875714 | orchestrator |       + filename = ".MANAGER_ADDRESS.ci"
2026-02-03 02:24:10.875721 | orchestrator |       + id = (known after apply)
2026-02-03 02:24:10.875728 | orchestrator |     }
2026-02-03 02:24:10.875734 | orchestrator |
2026-02-03 02:24:10.875740 | orchestrator |   # local_file.id_rsa_pub will be created
2026-02-03 02:24:10.875746 | orchestrator |   + resource "local_file" "id_rsa_pub" {
2026-02-03 02:24:10.875753 | orchestrator |       + content = (known after apply)
2026-02-03 02:24:10.875759 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-03 02:24:10.875765 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-03 02:24:10.875771 | orchestrator |       + content_md5 = (known after apply)
2026-02-03 02:24:10.875777 | orchestrator |       + content_sha1 = (known after apply)
2026-02-03 02:24:10.875783 | orchestrator |       + content_sha256 = (known after apply)
2026-02-03 02:24:10.875790 | orchestrator |       + content_sha512 = (known after apply)
2026-02-03 02:24:10.875796 | orchestrator |       + directory_permission = "0777"
2026-02-03 02:24:10.875802 | orchestrator |       + file_permission = "0644"
2026-02-03 02:24:10.875817 | orchestrator |       + filename = ".id_rsa.ci.pub"
2026-02-03 02:24:10.875823 | orchestrator |       + id = (known after apply)
2026-02-03 02:24:10.875829 | orchestrator |     }
2026-02-03 02:24:10.875835 | orchestrator |
2026-02-03 02:24:10.875849 | orchestrator |   # local_file.inventory will be created
2026-02-03 02:24:10.875856 | orchestrator |   + resource "local_file" "inventory" {
2026-02-03 02:24:10.875862 | orchestrator |       + content = (known after apply)
2026-02-03 02:24:10.875868 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-03 02:24:10.875874 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-03 02:24:10.875880 | orchestrator |       + content_md5 = (known after apply)
2026-02-03 02:24:10.875886 | orchestrator |       + content_sha1 = (known after apply)
2026-02-03 02:24:10.875893 | orchestrator |       + content_sha256 = (known after apply)
2026-02-03 02:24:10.875899 | orchestrator |       + content_sha512 = (known after apply)
2026-02-03 02:24:10.875905 | orchestrator |       + directory_permission = "0777"
2026-02-03 02:24:10.875911 | orchestrator |       + file_permission = "0644"
2026-02-03 02:24:10.875917 | orchestrator |       + filename = "inventory.ci"
2026-02-03 02:24:10.875923 | orchestrator |       + id = (known after apply)
2026-02-03 02:24:10.875929 | orchestrator |     }
2026-02-03 02:24:10.875935 | orchestrator |
2026-02-03 02:24:10.875941 | orchestrator |   # local_sensitive_file.id_rsa will be created
2026-02-03 02:24:10.875947 | orchestrator |   + resource "local_sensitive_file" "id_rsa" {
2026-02-03 02:24:10.875953 | orchestrator |       + content = (sensitive value)
2026-02-03 02:24:10.875959 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-03 02:24:10.875966 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-03 02:24:10.875972 | orchestrator |       + content_md5 = (known after apply)
2026-02-03 02:24:10.875978 | orchestrator |       + content_sha1 = (known after apply)
2026-02-03 02:24:10.875985 | orchestrator |       + content_sha256 = (known after apply)
2026-02-03 02:24:10.875991 | orchestrator |       + content_sha512 = (known after apply)
2026-02-03 02:24:10.875997 | orchestrator |       + directory_permission = "0700"
2026-02-03 02:24:10.876004 | orchestrator |       + file_permission = "0600"
2026-02-03 02:24:10.876009 | orchestrator |       + filename = ".id_rsa.ci"
2026-02-03 02:24:10.876016 | orchestrator |       + id = (known after apply)
2026-02-03 02:24:10.876022 | orchestrator |     }
2026-02-03 02:24:10.876028 | orchestrator |
2026-02-03 02:24:10.876035 | orchestrator |   # null_resource.node_semaphore will be created
2026-02-03 02:24:10.876041 | orchestrator |   + resource "null_resource" "node_semaphore" {
2026-02-03 02:24:10.876048 | orchestrator |       + id = (known after apply)
2026-02-03 02:24:10.876054 | orchestrator |     }
2026-02-03 02:24:10.876060 | orchestrator |
2026-02-03 02:24:10.876067 | orchestrator |   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-02-03 02:24:10.876074 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-02-03 02:24:10.876080 | orchestrator |       + attachment = (known after apply)
2026-02-03 02:24:10.876087 | orchestrator |       + availability_zone = "nova"
2026-02-03 02:24:10.876093 | orchestrator |       + id = (known after apply)
2026-02-03 02:24:10.876100 | orchestrator |       + image_id = (known after apply)
2026-02-03 02:24:10.876106 | orchestrator |       + metadata = (known after apply)
2026-02-03 02:24:10.876112 | orchestrator |       + name = "testbed-volume-manager-base"
2026-02-03 02:24:10.876118 | orchestrator |       + region = (known after apply)
2026-02-03 02:24:10.876124 | orchestrator |       + size = 80
2026-02-03 02:24:10.876130 | orchestrator |       + volume_retype_policy = "never"
2026-02-03 02:24:10.876137 | orchestrator |       + volume_type = "ssd"
2026-02-03 02:24:10.876143 | orchestrator |     }
2026-02-03 02:24:10.876149 | orchestrator |
2026-02-03 02:24:10.876155 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-02-03 02:24:10.876162 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-03 02:24:10.876168 | orchestrator |       + attachment = (known after apply)
2026-02-03 02:24:10.876175 | orchestrator |       + availability_zone = "nova"
2026-02-03 02:24:10.876181 | orchestrator |       + id = (known after apply)
2026-02-03 02:24:10.876193 | orchestrator |       + image_id = (known after apply)
2026-02-03 02:24:10.876200 | orchestrator |       + metadata = (known after apply)
2026-02-03 02:24:10.876206 | orchestrator |       + name = "testbed-volume-0-node-base"
2026-02-03 02:24:10.876212 | orchestrator |       + region = (known after apply)
2026-02-03 02:24:10.876218 | orchestrator |       + size = 80
2026-02-03 02:24:10.876225 | orchestrator |       + volume_retype_policy = "never"
2026-02-03 02:24:10.876231 | orchestrator |       + volume_type = "ssd"
2026-02-03 02:24:10.876237 | orchestrator |     }
2026-02-03 02:24:10.876249 | orchestrator |
2026-02-03 02:24:10.876255 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-02-03 02:24:10.876262 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-03 02:24:10.876269 | orchestrator |       + attachment = (known after apply)
2026-02-03 02:24:10.876275 | orchestrator |       + availability_zone = "nova"
2026-02-03 02:24:10.876282 | orchestrator |       + id = (known after apply)
2026-02-03 02:24:10.876288 | orchestrator |       + image_id = (known after apply)
2026-02-03 02:24:10.876295 | orchestrator |       + metadata = (known after apply)
2026-02-03 02:24:10.876301 | orchestrator |       + name = "testbed-volume-1-node-base"
2026-02-03 02:24:10.876308 | orchestrator |       + region = (known after apply)
2026-02-03 02:24:10.876314 | orchestrator |       + size = 80
2026-02-03 02:24:10.876321 | orchestrator |       + volume_retype_policy = "never"
2026-02-03 02:24:10.876327 | orchestrator |       + volume_type = "ssd"
2026-02-03 02:24:10.876334 | orchestrator |     }
2026-02-03 02:24:10.876341 | orchestrator |
2026-02-03 02:24:10.876347 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-02-03 02:24:10.876353 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-03 02:24:10.876360 | orchestrator |       + attachment = (known after apply)
2026-02-03 02:24:10.876367 | orchestrator |       + availability_zone = "nova"
2026-02-03 02:24:10.876373 | orchestrator |       + id = (known after apply)
2026-02-03 02:24:10.876380 | orchestrator |       + image_id = (known after apply)
2026-02-03 02:24:10.876386 | orchestrator |       + metadata = (known after apply)
2026-02-03 02:24:10.876393 | orchestrator |       + name = "testbed-volume-2-node-base"
2026-02-03 02:24:10.876399 | orchestrator |       + region = (known after apply)
2026-02-03 02:24:10.876405 | orchestrator |       + size = 80
2026-02-03 02:24:10.876412 | orchestrator |       + volume_retype_policy = "never"
2026-02-03 02:24:10.876418 | orchestrator |       + volume_type = "ssd"
2026-02-03 02:24:10.876425 | orchestrator |     }
2026-02-03 02:24:10.876431 | orchestrator |
2026-02-03 02:24:10.876437 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-02-03 02:24:10.876443 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-03 02:24:10.876449 | orchestrator |       + attachment = (known after apply)
2026-02-03 02:24:10.876455 | orchestrator |       + availability_zone = "nova"
2026-02-03 02:24:10.876461 | orchestrator |       + id = (known after apply)
2026-02-03 02:24:10.876467 | orchestrator |       + image_id = (known after apply)
2026-02-03 02:24:10.876473 | orchestrator |       + metadata = (known after apply)
2026-02-03 02:24:10.876485 | orchestrator |       + name = "testbed-volume-3-node-base"
2026-02-03 02:24:10.876491 | orchestrator |       + region = (known after apply)
2026-02-03 02:24:10.876512 | orchestrator |       + size = 80
2026-02-03 02:24:10.876518 | orchestrator |       + volume_retype_policy = "never"
2026-02-03 02:24:10.876525 | orchestrator |       + volume_type = "ssd"
2026-02-03 02:24:10.876531 | orchestrator |     }
2026-02-03 02:24:10.876537 | orchestrator |
2026-02-03 02:24:10.876543 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-02-03 02:24:10.876549 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-03 02:24:10.876555 | orchestrator |       + attachment = (known after apply)
2026-02-03 02:24:10.876561 | orchestrator |       + availability_zone = "nova"
2026-02-03 02:24:10.876568 | orchestrator |       + id = (known after apply)
2026-02-03 02:24:10.876580 | orchestrator |       + image_id = (known after apply)
2026-02-03 02:24:10.876586 | orchestrator |       + metadata = (known after apply)
2026-02-03 02:24:10.876592 | orchestrator |       + name = "testbed-volume-4-node-base"
2026-02-03 02:24:10.876598 | orchestrator |       + region = (known after apply)
2026-02-03 02:24:10.876605 | orchestrator |       + size = 80
2026-02-03 02:24:10.876611 | orchestrator |       + volume_retype_policy = "never"
2026-02-03 02:24:10.876617 | orchestrator |       + volume_type = "ssd"
2026-02-03 02:24:10.876623 | orchestrator |     }
2026-02-03 02:24:10.876629 | orchestrator |
2026-02-03 02:24:10.876635 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-02-03 02:24:10.876641 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-03 02:24:10.876647 | orchestrator |       + attachment = (known after apply)
2026-02-03 02:24:10.876653 | orchestrator |       + availability_zone = "nova"
2026-02-03 02:24:10.876659 | orchestrator |       + id = (known after apply)
2026-02-03 02:24:10.876665 | orchestrator |       + image_id = (known after apply)
2026-02-03 02:24:10.876670 | orchestrator |       + metadata = (known after apply)
2026-02-03 02:24:10.876677 | orchestrator |       + name = "testbed-volume-5-node-base"
2026-02-03 02:24:10.876683 | orchestrator |       + region = (known after apply)
2026-02-03 02:24:10.876689 | orchestrator |       + size = 80
2026-02-03 02:24:10.876695 | orchestrator |       + volume_retype_policy = "never"
2026-02-03 02:24:10.876701 | orchestrator |       + volume_type = "ssd"
2026-02-03 02:24:10.876708 | orchestrator |     }
2026-02-03 02:24:10.876713 | orchestrator |
2026-02-03 02:24:10.876720 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-02-03 02:24:10.876727 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-03 02:24:10.876733 | orchestrator |       + attachment = (known after apply)
2026-02-03 02:24:10.876740 | orchestrator |       + availability_zone = "nova"
2026-02-03 02:24:10.876745 | orchestrator |       + id = (known after apply)
2026-02-03 02:24:10.876751 | orchestrator |       + metadata = (known after apply)
2026-02-03 02:24:10.876757 | orchestrator |       + name = "testbed-volume-0-node-3"
2026-02-03 02:24:10.876763 | orchestrator |       + region = (known after apply)
2026-02-03 02:24:10.876769 | orchestrator |       + size = 20
2026-02-03 02:24:10.876776 | orchestrator |       + volume_retype_policy = "never"
2026-02-03 02:24:10.876783 | orchestrator |       + volume_type = "ssd"
2026-02-03 02:24:10.876789 | orchestrator |     }
2026-02-03 02:24:10.876793 | orchestrator |
2026-02-03 02:24:10.876797 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-02-03 02:24:10.876800 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-03 02:24:10.876804 | orchestrator |       + attachment = (known after apply)
2026-02-03 02:24:10.876808 | orchestrator |       + availability_zone = "nova"
2026-02-03 02:24:10.876812 | orchestrator |       + id = (known after apply)
2026-02-03 02:24:10.876816 | orchestrator |       + metadata = (known after apply)
2026-02-03 02:24:10.876820 | orchestrator |       + name = "testbed-volume-1-node-4"
2026-02-03 02:24:10.876823 | orchestrator |       + region = (known after apply)
2026-02-03 02:24:10.876827 | orchestrator |       + size = 20
2026-02-03 02:24:10.876836 | orchestrator |       + volume_retype_policy = "never"
2026-02-03 02:24:10.876840 | orchestrator |       + volume_type = "ssd"
2026-02-03 02:24:10.876844 | orchestrator |     }
2026-02-03 02:24:10.876848 | orchestrator |
2026-02-03 02:24:10.876852 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-02-03 02:24:10.876856 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-03 02:24:10.876860 | orchestrator |       + attachment = (known after apply)
2026-02-03 02:24:10.876863 | orchestrator |       + availability_zone = "nova"
2026-02-03 02:24:10.876867 | orchestrator |       + id = (known after apply)
2026-02-03 02:24:10.876871 | orchestrator |       + metadata = (known after apply)
2026-02-03 02:24:10.876875 | orchestrator |       + name = "testbed-volume-2-node-5"
2026-02-03 02:24:10.876878 | orchestrator |       + region = (known after apply)
2026-02-03 02:24:10.876887 | orchestrator |       + size = 20
2026-02-03 02:24:10.876891 | orchestrator |       + volume_retype_policy = "never"
2026-02-03 02:24:10.876895 | orchestrator |       + volume_type = "ssd"
2026-02-03 02:24:10.876899 | orchestrator |     }
2026-02-03 02:24:10.876902 | orchestrator |
2026-02-03 02:24:10.876906 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-02-03 02:24:10.876910 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-03 02:24:10.876914 | orchestrator |       + attachment = (known after apply)
2026-02-03 02:24:10.876917 | orchestrator |       + availability_zone = "nova"
2026-02-03 02:24:10.876922 | orchestrator |       + id = (known after apply)
2026-02-03 02:24:10.876928 | orchestrator |       + metadata = (known after apply)
2026-02-03 02:24:10.876934 | orchestrator |       + name = "testbed-volume-3-node-3"
2026-02-03 02:24:10.876940 | orchestrator |       + region = (known after apply)
2026-02-03 02:24:10.876946 | orchestrator |       + size = 20
2026-02-03 02:24:10.876953 | orchestrator |       + volume_retype_policy = "never"
2026-02-03 02:24:10.876958 | orchestrator |       + volume_type = "ssd"
2026-02-03 02:24:10.876964 | orchestrator |     }
2026-02-03 02:24:10.876970 | orchestrator |
2026-02-03 02:24:10.876976 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-02-03 02:24:10.876981 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-03 02:24:10.876987 | orchestrator |       + attachment = (known after apply)
2026-02-03 02:24:10.876993 | orchestrator |       + availability_zone = "nova"
2026-02-03 02:24:10.876998 | orchestrator |       + id = (known after apply)
2026-02-03 02:24:10.877004 | orchestrator |       + metadata = (known after apply)
2026-02-03 02:24:10.877009 | orchestrator |       + name = "testbed-volume-4-node-4"
2026-02-03 02:24:10.877015 | orchestrator |       + region = (known after apply)
2026-02-03 02:24:10.877025 | orchestrator |       + size = 20
2026-02-03 02:24:10.877031 | orchestrator |       + volume_retype_policy = "never"
2026-02-03 02:24:10.877037 | orchestrator |       + volume_type = "ssd"
2026-02-03 02:24:10.877043 | orchestrator |     }
2026-02-03 02:24:10.877049 | orchestrator |
2026-02-03 02:24:10.877056 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-02-03 02:24:10.877062 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-03 02:24:10.877069 | orchestrator |       + attachment = (known after apply)
2026-02-03 02:24:10.877075 | orchestrator |       + availability_zone = "nova"
2026-02-03 02:24:10.877081 | orchestrator |       + id = (known after apply)
2026-02-03 02:24:10.877087 | orchestrator |       + metadata = (known after apply)
2026-02-03 02:24:10.877094 | orchestrator |       + name = "testbed-volume-5-node-5"
2026-02-03 02:24:10.877100 | orchestrator |       + region = (known after apply)
2026-02-03 02:24:10.877106 | orchestrator |       + size = 20
2026-02-03 02:24:10.877113 | orchestrator |       + volume_retype_policy = "never"
2026-02-03 02:24:10.877119 | orchestrator |       + volume_type = "ssd"
2026-02-03 02:24:10.877126 | orchestrator |     }
2026-02-03 02:24:10.877132 | orchestrator |
2026-02-03 02:24:10.877137 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-02-03 02:24:10.877141 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-03 02:24:10.877145 | orchestrator |       + attachment = (known after apply)
2026-02-03 02:24:10.877149 | orchestrator |       + availability_zone = "nova"
2026-02-03 02:24:10.877152 | orchestrator |       + id = (known after apply)
2026-02-03 02:24:10.877156 | orchestrator |       + metadata = (known after apply)
2026-02-03 02:24:10.877160 | orchestrator |       + name = "testbed-volume-6-node-3"
2026-02-03 02:24:10.877164 | orchestrator |       + region = (known after apply)
2026-02-03 02:24:10.877167 | orchestrator |       + size = 20
2026-02-03 02:24:10.877171 | orchestrator |       + volume_retype_policy = "never"
2026-02-03 02:24:10.877175 | orchestrator |       + volume_type = "ssd"
2026-02-03 02:24:10.877180 | orchestrator |     }
2026-02-03 02:24:10.877187 | orchestrator |
2026-02-03 02:24:10.877194 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-02-03 02:24:10.877199 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-03 02:24:10.877211 | orchestrator |       + attachment = (known after apply)
2026-02-03 02:24:10.877217 | orchestrator |       + availability_zone = "nova"
2026-02-03 02:24:10.877224 | orchestrator |       + id = (known after apply)
2026-02-03 02:24:10.877230 | orchestrator |       + metadata = (known after apply)
2026-02-03 02:24:10.877237 | orchestrator |       + name = "testbed-volume-7-node-4"
2026-02-03 02:24:10.877243 | orchestrator |       + region = (known after apply)
2026-02-03 02:24:10.877250 | orchestrator |       + size = 20
2026-02-03 02:24:10.877257 | orchestrator |       + volume_retype_policy = "never"
2026-02-03 02:24:10.877263 | orchestrator |       + volume_type = "ssd"
2026-02-03 02:24:10.877269 | orchestrator |     }
2026-02-03 02:24:10.877273 | orchestrator |
2026-02-03 02:24:10.877277 | orchestrator |   #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-02-03 02:24:10.877281 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-03 02:24:10.877285 | orchestrator | + attachment = (known after apply) 2026-02-03 02:24:10.877288 | orchestrator | + availability_zone = "nova" 2026-02-03 02:24:10.877292 | orchestrator | + id = (known after apply) 2026-02-03 02:24:10.877296 | orchestrator | + metadata = (known after apply) 2026-02-03 02:24:10.877300 | orchestrator | + name = "testbed-volume-8-node-5" 2026-02-03 02:24:10.877303 | orchestrator | + region = (known after apply) 2026-02-03 02:24:10.877307 | orchestrator | + size = 20 2026-02-03 02:24:10.877311 | orchestrator | + volume_retype_policy = "never" 2026-02-03 02:24:10.877315 | orchestrator | + volume_type = "ssd" 2026-02-03 02:24:10.877318 | orchestrator | } 2026-02-03 02:24:10.877322 | orchestrator | 2026-02-03 02:24:10.877326 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-02-03 02:24:10.877330 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-02-03 02:24:10.877333 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-03 02:24:10.877341 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-03 02:24:10.877345 | orchestrator | + all_metadata = (known after apply) 2026-02-03 02:24:10.877348 | orchestrator | + all_tags = (known after apply) 2026-02-03 02:24:10.877352 | orchestrator | + availability_zone = "nova" 2026-02-03 02:24:10.877356 | orchestrator | + config_drive = true 2026-02-03 02:24:10.877360 | orchestrator | + created = (known after apply) 2026-02-03 02:24:10.877364 | orchestrator | + flavor_id = (known after apply) 2026-02-03 02:24:10.877367 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-02-03 02:24:10.877371 | orchestrator | + force_delete = false 2026-02-03 02:24:10.877375 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-03 02:24:10.877378 | 
orchestrator | + id = (known after apply) 2026-02-03 02:24:10.877382 | orchestrator | + image_id = (known after apply) 2026-02-03 02:24:10.877386 | orchestrator | + image_name = (known after apply) 2026-02-03 02:24:10.877390 | orchestrator | + key_pair = "testbed" 2026-02-03 02:24:10.877393 | orchestrator | + name = "testbed-manager" 2026-02-03 02:24:10.877397 | orchestrator | + power_state = "active" 2026-02-03 02:24:10.877401 | orchestrator | + region = (known after apply) 2026-02-03 02:24:10.877405 | orchestrator | + security_groups = (known after apply) 2026-02-03 02:24:10.877408 | orchestrator | + stop_before_destroy = false 2026-02-03 02:24:10.877412 | orchestrator | + updated = (known after apply) 2026-02-03 02:24:10.877416 | orchestrator | + user_data = (sensitive value) 2026-02-03 02:24:10.877420 | orchestrator | 2026-02-03 02:24:10.877424 | orchestrator | + block_device { 2026-02-03 02:24:10.877428 | orchestrator | + boot_index = 0 2026-02-03 02:24:10.877432 | orchestrator | + delete_on_termination = false 2026-02-03 02:24:10.877439 | orchestrator | + destination_type = "volume" 2026-02-03 02:24:10.877443 | orchestrator | + multiattach = false 2026-02-03 02:24:10.877447 | orchestrator | + source_type = "volume" 2026-02-03 02:24:10.877450 | orchestrator | + uuid = (known after apply) 2026-02-03 02:24:10.877457 | orchestrator | } 2026-02-03 02:24:10.877461 | orchestrator | 2026-02-03 02:24:10.877465 | orchestrator | + network { 2026-02-03 02:24:10.877468 | orchestrator | + access_network = false 2026-02-03 02:24:10.877472 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-03 02:24:10.877476 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-03 02:24:10.877480 | orchestrator | + mac = (known after apply) 2026-02-03 02:24:10.877484 | orchestrator | + name = (known after apply) 2026-02-03 02:24:10.877487 | orchestrator | + port = (known after apply) 2026-02-03 02:24:10.877491 | orchestrator | + uuid = (known after apply) 2026-02-03 
02:24:10.877516 | orchestrator | } 2026-02-03 02:24:10.877520 | orchestrator | } 2026-02-03 02:24:10.877524 | orchestrator | 2026-02-03 02:24:10.877528 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-02-03 02:24:10.877532 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-03 02:24:10.877536 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-03 02:24:10.877539 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-03 02:24:10.877543 | orchestrator | + all_metadata = (known after apply) 2026-02-03 02:24:10.877547 | orchestrator | + all_tags = (known after apply) 2026-02-03 02:24:10.877550 | orchestrator | + availability_zone = "nova" 2026-02-03 02:24:10.877554 | orchestrator | + config_drive = true 2026-02-03 02:24:10.877558 | orchestrator | + created = (known after apply) 2026-02-03 02:24:10.877562 | orchestrator | + flavor_id = (known after apply) 2026-02-03 02:24:10.877565 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-03 02:24:10.877569 | orchestrator | + force_delete = false 2026-02-03 02:24:10.877573 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-03 02:24:10.877577 | orchestrator | + id = (known after apply) 2026-02-03 02:24:10.877580 | orchestrator | + image_id = (known after apply) 2026-02-03 02:24:10.877584 | orchestrator | + image_name = (known after apply) 2026-02-03 02:24:10.877588 | orchestrator | + key_pair = "testbed" 2026-02-03 02:24:10.877592 | orchestrator | + name = "testbed-node-0" 2026-02-03 02:24:10.877595 | orchestrator | + power_state = "active" 2026-02-03 02:24:10.877599 | orchestrator | + region = (known after apply) 2026-02-03 02:24:10.877603 | orchestrator | + security_groups = (known after apply) 2026-02-03 02:24:10.877606 | orchestrator | + stop_before_destroy = false 2026-02-03 02:24:10.877610 | orchestrator | + updated = (known after apply) 2026-02-03 02:24:10.877614 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-03 02:24:10.877618 | orchestrator | 2026-02-03 02:24:10.877622 | orchestrator | + block_device { 2026-02-03 02:24:10.877625 | orchestrator | + boot_index = 0 2026-02-03 02:24:10.877629 | orchestrator | + delete_on_termination = false 2026-02-03 02:24:10.877633 | orchestrator | + destination_type = "volume" 2026-02-03 02:24:10.877637 | orchestrator | + multiattach = false 2026-02-03 02:24:10.877643 | orchestrator | + source_type = "volume" 2026-02-03 02:24:10.877649 | orchestrator | + uuid = (known after apply) 2026-02-03 02:24:10.877655 | orchestrator | } 2026-02-03 02:24:10.877661 | orchestrator | 2026-02-03 02:24:10.877667 | orchestrator | + network { 2026-02-03 02:24:10.877672 | orchestrator | + access_network = false 2026-02-03 02:24:10.877679 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-03 02:24:10.877684 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-03 02:24:10.877688 | orchestrator | + mac = (known after apply) 2026-02-03 02:24:10.877691 | orchestrator | + name = (known after apply) 2026-02-03 02:24:10.877695 | orchestrator | + port = (known after apply) 2026-02-03 02:24:10.877699 | orchestrator | + uuid = (known after apply) 2026-02-03 02:24:10.877703 | orchestrator | } 2026-02-03 02:24:10.877706 | orchestrator | } 2026-02-03 02:24:10.877710 | orchestrator | 2026-02-03 02:24:10.877714 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-02-03 02:24:10.877718 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-03 02:24:10.877722 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-03 02:24:10.877729 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-03 02:24:10.877733 | orchestrator | + all_metadata = (known after apply) 2026-02-03 02:24:10.877737 | orchestrator | + all_tags = (known after apply) 2026-02-03 02:24:10.877741 | orchestrator | + availability_zone = "nova" 2026-02-03 02:24:10.877744 
| orchestrator | + config_drive = true 2026-02-03 02:24:10.877748 | orchestrator | + created = (known after apply) 2026-02-03 02:24:10.877752 | orchestrator | + flavor_id = (known after apply) 2026-02-03 02:24:10.877756 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-03 02:24:10.877759 | orchestrator | + force_delete = false 2026-02-03 02:24:10.877763 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-03 02:24:10.877770 | orchestrator | + id = (known after apply) 2026-02-03 02:24:10.877774 | orchestrator | + image_id = (known after apply) 2026-02-03 02:24:10.877777 | orchestrator | + image_name = (known after apply) 2026-02-03 02:24:10.877781 | orchestrator | + key_pair = "testbed" 2026-02-03 02:24:10.877785 | orchestrator | + name = "testbed-node-1" 2026-02-03 02:24:10.877788 | orchestrator | + power_state = "active" 2026-02-03 02:24:10.877792 | orchestrator | + region = (known after apply) 2026-02-03 02:24:10.877796 | orchestrator | + security_groups = (known after apply) 2026-02-03 02:24:10.877800 | orchestrator | + stop_before_destroy = false 2026-02-03 02:24:10.877803 | orchestrator | + updated = (known after apply) 2026-02-03 02:24:10.877807 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-03 02:24:10.877811 | orchestrator | 2026-02-03 02:24:10.877815 | orchestrator | + block_device { 2026-02-03 02:24:10.877819 | orchestrator | + boot_index = 0 2026-02-03 02:24:10.877822 | orchestrator | + delete_on_termination = false 2026-02-03 02:24:10.877826 | orchestrator | + destination_type = "volume" 2026-02-03 02:24:10.877830 | orchestrator | + multiattach = false 2026-02-03 02:24:10.877833 | orchestrator | + source_type = "volume" 2026-02-03 02:24:10.877837 | orchestrator | + uuid = (known after apply) 2026-02-03 02:24:10.877841 | orchestrator | } 2026-02-03 02:24:10.877845 | orchestrator | 2026-02-03 02:24:10.877849 | orchestrator | + network { 2026-02-03 02:24:10.877852 | orchestrator | + access_network = 
false 2026-02-03 02:24:10.877856 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-03 02:24:10.877860 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-03 02:24:10.877864 | orchestrator | + mac = (known after apply) 2026-02-03 02:24:10.877868 | orchestrator | + name = (known after apply) 2026-02-03 02:24:10.877871 | orchestrator | + port = (known after apply) 2026-02-03 02:24:10.877875 | orchestrator | + uuid = (known after apply) 2026-02-03 02:24:10.877879 | orchestrator | } 2026-02-03 02:24:10.877882 | orchestrator | } 2026-02-03 02:24:10.877887 | orchestrator | 2026-02-03 02:24:10.877893 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-02-03 02:24:10.877899 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-03 02:24:10.877906 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-03 02:24:10.877912 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-03 02:24:10.877918 | orchestrator | + all_metadata = (known after apply) 2026-02-03 02:24:10.877923 | orchestrator | + all_tags = (known after apply) 2026-02-03 02:24:10.877933 | orchestrator | + availability_zone = "nova" 2026-02-03 02:24:10.877939 | orchestrator | + config_drive = true 2026-02-03 02:24:10.877945 | orchestrator | + created = (known after apply) 2026-02-03 02:24:10.877951 | orchestrator | + flavor_id = (known after apply) 2026-02-03 02:24:10.877957 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-03 02:24:10.877963 | orchestrator | + force_delete = false 2026-02-03 02:24:10.877969 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-03 02:24:10.877975 | orchestrator | + id = (known after apply) 2026-02-03 02:24:10.877981 | orchestrator | + image_id = (known after apply) 2026-02-03 02:24:10.877999 | orchestrator | + image_name = (known after apply) 2026-02-03 02:24:10.878006 | orchestrator | + key_pair = "testbed" 2026-02-03 02:24:10.878035 | orchestrator | + name = 
"testbed-node-2" 2026-02-03 02:24:10.878046 | orchestrator | + power_state = "active" 2026-02-03 02:24:10.878053 | orchestrator | + region = (known after apply) 2026-02-03 02:24:10.878060 | orchestrator | + security_groups = (known after apply) 2026-02-03 02:24:10.878067 | orchestrator | + stop_before_destroy = false 2026-02-03 02:24:10.878075 | orchestrator | + updated = (known after apply) 2026-02-03 02:24:10.878082 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-03 02:24:10.878089 | orchestrator | 2026-02-03 02:24:10.878096 | orchestrator | + block_device { 2026-02-03 02:24:10.878104 | orchestrator | + boot_index = 0 2026-02-03 02:24:10.878111 | orchestrator | + delete_on_termination = false 2026-02-03 02:24:10.878118 | orchestrator | + destination_type = "volume" 2026-02-03 02:24:10.878126 | orchestrator | + multiattach = false 2026-02-03 02:24:10.878133 | orchestrator | + source_type = "volume" 2026-02-03 02:24:10.878139 | orchestrator | + uuid = (known after apply) 2026-02-03 02:24:10.878147 | orchestrator | } 2026-02-03 02:24:10.878153 | orchestrator | 2026-02-03 02:24:10.878161 | orchestrator | + network { 2026-02-03 02:24:10.878167 | orchestrator | + access_network = false 2026-02-03 02:24:10.878173 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-03 02:24:10.878180 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-03 02:24:10.878186 | orchestrator | + mac = (known after apply) 2026-02-03 02:24:10.878192 | orchestrator | + name = (known after apply) 2026-02-03 02:24:10.878198 | orchestrator | + port = (known after apply) 2026-02-03 02:24:10.878205 | orchestrator | + uuid = (known after apply) 2026-02-03 02:24:10.878211 | orchestrator | } 2026-02-03 02:24:10.878217 | orchestrator | } 2026-02-03 02:24:10.878223 | orchestrator | 2026-02-03 02:24:10.878230 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-02-03 02:24:10.878236 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-02-03 02:24:10.878243 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-03 02:24:10.878249 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-03 02:24:10.878256 | orchestrator | + all_metadata = (known after apply) 2026-02-03 02:24:10.878262 | orchestrator | + all_tags = (known after apply) 2026-02-03 02:24:10.878268 | orchestrator | + availability_zone = "nova" 2026-02-03 02:24:10.878275 | orchestrator | + config_drive = true 2026-02-03 02:24:10.878281 | orchestrator | + created = (known after apply) 2026-02-03 02:24:10.878287 | orchestrator | + flavor_id = (known after apply) 2026-02-03 02:24:10.878294 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-03 02:24:10.878300 | orchestrator | + force_delete = false 2026-02-03 02:24:10.878307 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-03 02:24:10.878314 | orchestrator | + id = (known after apply) 2026-02-03 02:24:10.878321 | orchestrator | + image_id = (known after apply) 2026-02-03 02:24:10.878327 | orchestrator | + image_name = (known after apply) 2026-02-03 02:24:10.878334 | orchestrator | + key_pair = "testbed" 2026-02-03 02:24:10.878340 | orchestrator | + name = "testbed-node-3" 2026-02-03 02:24:10.878347 | orchestrator | + power_state = "active" 2026-02-03 02:24:10.878353 | orchestrator | + region = (known after apply) 2026-02-03 02:24:10.878360 | orchestrator | + security_groups = (known after apply) 2026-02-03 02:24:10.878366 | orchestrator | + stop_before_destroy = false 2026-02-03 02:24:10.878373 | orchestrator | + updated = (known after apply) 2026-02-03 02:24:10.878386 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-03 02:24:10.878394 | orchestrator | 2026-02-03 02:24:10.878400 | orchestrator | + block_device { 2026-02-03 02:24:10.878412 | orchestrator | + boot_index = 0 2026-02-03 02:24:10.878418 | orchestrator | + delete_on_termination = false 2026-02-03 
02:24:10.878425 | orchestrator | + destination_type = "volume" 2026-02-03 02:24:10.878437 | orchestrator | + multiattach = false 2026-02-03 02:24:10.878443 | orchestrator | + source_type = "volume" 2026-02-03 02:24:10.878450 | orchestrator | + uuid = (known after apply) 2026-02-03 02:24:10.878457 | orchestrator | } 2026-02-03 02:24:10.878463 | orchestrator | 2026-02-03 02:24:10.878470 | orchestrator | + network { 2026-02-03 02:24:10.878477 | orchestrator | + access_network = false 2026-02-03 02:24:10.878484 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-03 02:24:10.878490 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-03 02:24:10.878528 | orchestrator | + mac = (known after apply) 2026-02-03 02:24:10.878534 | orchestrator | + name = (known after apply) 2026-02-03 02:24:10.878540 | orchestrator | + port = (known after apply) 2026-02-03 02:24:10.878546 | orchestrator | + uuid = (known after apply) 2026-02-03 02:24:10.878553 | orchestrator | } 2026-02-03 02:24:10.878559 | orchestrator | } 2026-02-03 02:24:10.878564 | orchestrator | 2026-02-03 02:24:10.878570 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-02-03 02:24:10.878576 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-03 02:24:10.878582 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-03 02:24:10.878589 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-03 02:24:10.878594 | orchestrator | + all_metadata = (known after apply) 2026-02-03 02:24:10.878600 | orchestrator | + all_tags = (known after apply) 2026-02-03 02:24:10.878607 | orchestrator | + availability_zone = "nova" 2026-02-03 02:24:10.878612 | orchestrator | + config_drive = true 2026-02-03 02:24:10.878619 | orchestrator | + created = (known after apply) 2026-02-03 02:24:10.878625 | orchestrator | + flavor_id = (known after apply) 2026-02-03 02:24:10.878631 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-03 02:24:10.878638 | 
orchestrator | + force_delete = false 2026-02-03 02:24:10.878644 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-03 02:24:10.878650 | orchestrator | + id = (known after apply) 2026-02-03 02:24:10.878656 | orchestrator | + image_id = (known after apply) 2026-02-03 02:24:10.878662 | orchestrator | + image_name = (known after apply) 2026-02-03 02:24:10.878668 | orchestrator | + key_pair = "testbed" 2026-02-03 02:24:10.878674 | orchestrator | + name = "testbed-node-4" 2026-02-03 02:24:10.878680 | orchestrator | + power_state = "active" 2026-02-03 02:24:10.878686 | orchestrator | + region = (known after apply) 2026-02-03 02:24:10.878692 | orchestrator | + security_groups = (known after apply) 2026-02-03 02:24:10.878699 | orchestrator | + stop_before_destroy = false 2026-02-03 02:24:10.878705 | orchestrator | + updated = (known after apply) 2026-02-03 02:24:10.878710 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-03 02:24:10.878716 | orchestrator | 2026-02-03 02:24:10.878721 | orchestrator | + block_device { 2026-02-03 02:24:10.878727 | orchestrator | + boot_index = 0 2026-02-03 02:24:10.878732 | orchestrator | + delete_on_termination = false 2026-02-03 02:24:10.878738 | orchestrator | + destination_type = "volume" 2026-02-03 02:24:10.878744 | orchestrator | + multiattach = false 2026-02-03 02:24:10.878750 | orchestrator | + source_type = "volume" 2026-02-03 02:24:10.878756 | orchestrator | + uuid = (known after apply) 2026-02-03 02:24:10.878762 | orchestrator | } 2026-02-03 02:24:10.878768 | orchestrator | 2026-02-03 02:24:10.878775 | orchestrator | + network { 2026-02-03 02:24:10.878781 | orchestrator | + access_network = false 2026-02-03 02:24:10.878788 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-03 02:24:10.878794 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-03 02:24:10.878800 | orchestrator | + mac = (known after apply) 2026-02-03 02:24:10.878806 | orchestrator | + name = (known 
after apply) 2026-02-03 02:24:10.878812 | orchestrator | + port = (known after apply) 2026-02-03 02:24:10.878818 | orchestrator | + uuid = (known after apply) 2026-02-03 02:24:10.878825 | orchestrator | } 2026-02-03 02:24:10.878831 | orchestrator | } 2026-02-03 02:24:10.878844 | orchestrator | 2026-02-03 02:24:10.878850 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-02-03 02:24:10.878856 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-03 02:24:10.878863 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-03 02:24:10.878869 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-03 02:24:10.878875 | orchestrator | + all_metadata = (known after apply) 2026-02-03 02:24:10.878882 | orchestrator | + all_tags = (known after apply) 2026-02-03 02:24:10.878887 | orchestrator | + availability_zone = "nova" 2026-02-03 02:24:10.878894 | orchestrator | + config_drive = true 2026-02-03 02:24:10.878900 | orchestrator | + created = (known after apply) 2026-02-03 02:24:10.878906 | orchestrator | + flavor_id = (known after apply) 2026-02-03 02:24:10.878912 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-03 02:24:10.878918 | orchestrator | + force_delete = false 2026-02-03 02:24:10.878928 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-03 02:24:10.878934 | orchestrator | + id = (known after apply) 2026-02-03 02:24:10.878940 | orchestrator | + image_id = (known after apply) 2026-02-03 02:24:10.878947 | orchestrator | + image_name = (known after apply) 2026-02-03 02:24:10.878952 | orchestrator | + key_pair = "testbed" 2026-02-03 02:24:10.878958 | orchestrator | + name = "testbed-node-5" 2026-02-03 02:24:10.878964 | orchestrator | + power_state = "active" 2026-02-03 02:24:10.878971 | orchestrator | + region = (known after apply) 2026-02-03 02:24:10.878978 | orchestrator | + security_groups = (known after apply) 2026-02-03 02:24:10.878984 | orchestrator | + 
stop_before_destroy = false 2026-02-03 02:24:10.878990 | orchestrator | + updated = (known after apply) 2026-02-03 02:24:10.878997 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-03 02:24:10.879003 | orchestrator | 2026-02-03 02:24:10.879009 | orchestrator | + block_device { 2026-02-03 02:24:10.879015 | orchestrator | + boot_index = 0 2026-02-03 02:24:10.879021 | orchestrator | + delete_on_termination = false 2026-02-03 02:24:10.879027 | orchestrator | + destination_type = "volume" 2026-02-03 02:24:10.879033 | orchestrator | + multiattach = false 2026-02-03 02:24:10.879039 | orchestrator | + source_type = "volume" 2026-02-03 02:24:10.879046 | orchestrator | + uuid = (known after apply) 2026-02-03 02:24:10.879052 | orchestrator | } 2026-02-03 02:24:10.879058 | orchestrator | 2026-02-03 02:24:10.879064 | orchestrator | + network { 2026-02-03 02:24:10.879070 | orchestrator | + access_network = false 2026-02-03 02:24:10.879082 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-03 02:24:10.879088 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-03 02:24:10.879095 | orchestrator | + mac = (known after apply) 2026-02-03 02:24:10.879102 | orchestrator | + name = (known after apply) 2026-02-03 02:24:10.879108 | orchestrator | + port = (known after apply) 2026-02-03 02:24:10.879114 | orchestrator | + uuid = (known after apply) 2026-02-03 02:24:10.879121 | orchestrator | } 2026-02-03 02:24:10.879127 | orchestrator | } 2026-02-03 02:24:10.879134 | orchestrator | 2026-02-03 02:24:10.879141 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-02-03 02:24:10.879147 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-02-03 02:24:10.879154 | orchestrator | + fingerprint = (known after apply) 2026-02-03 02:24:10.879160 | orchestrator | + id = (known after apply) 2026-02-03 02:24:10.879166 | orchestrator | + name = "testbed" 2026-02-03 02:24:10.879172 | orchestrator | + private_key = 
(sensitive value) 2026-02-03 02:24:10.879178 | orchestrator | + public_key = (known after apply) 2026-02-03 02:24:10.879184 | orchestrator | + region = (known after apply) 2026-02-03 02:24:10.879190 | orchestrator | + user_id = (known after apply) 2026-02-03 02:24:10.879196 | orchestrator | } 2026-02-03 02:24:10.879202 | orchestrator | 2026-02-03 02:24:10.879208 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-02-03 02:24:10.879214 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-03 02:24:10.879226 | orchestrator | + device = (known after apply) 2026-02-03 02:24:10.879232 | orchestrator | + id = (known after apply) 2026-02-03 02:24:10.879238 | orchestrator | + instance_id = (known after apply) 2026-02-03 02:24:10.879244 | orchestrator | + region = (known after apply) 2026-02-03 02:24:10.879250 | orchestrator | + volume_id = (known after apply) 2026-02-03 02:24:10.879257 | orchestrator | } 2026-02-03 02:24:10.879263 | orchestrator | 2026-02-03 02:24:10.879269 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-02-03 02:24:10.879276 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-03 02:24:10.879282 | orchestrator | + device = (known after apply) 2026-02-03 02:24:10.879288 | orchestrator | + id = (known after apply) 2026-02-03 02:24:10.879294 | orchestrator | + instance_id = (known after apply) 2026-02-03 02:24:10.879300 | orchestrator | + region = (known after apply) 2026-02-03 02:24:10.879306 | orchestrator | + volume_id = (known after apply) 2026-02-03 02:24:10.879312 | orchestrator | } 2026-02-03 02:24:10.879319 | orchestrator | 2026-02-03 02:24:10.879325 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-02-03 02:24:10.879331 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-02-03 02:24:10.886290 | orchestrator | + network_id = (known after apply) 2026-02-03 02:24:10.886295 | orchestrator | + no_gateway = false 2026-02-03 02:24:10.886299 | orchestrator | + region = (known after apply) 2026-02-03 02:24:10.886303 | orchestrator | + service_types = (known after apply) 2026-02-03 02:24:10.886314 | orchestrator | + tenant_id = (known after apply) 2026-02-03 02:24:10.886318 | orchestrator | 2026-02-03 02:24:10.886322 | orchestrator | + allocation_pool { 2026-02-03 02:24:10.886325 | orchestrator | + end = "192.168.31.250" 2026-02-03 02:24:10.886329 | orchestrator | + start = "192.168.31.200" 2026-02-03 02:24:10.886333 | orchestrator | } 2026-02-03 02:24:10.886337 | orchestrator | } 2026-02-03 02:24:10.886341 | orchestrator | 2026-02-03 02:24:10.886344 | orchestrator | # terraform_data.image will be created 2026-02-03 02:24:10.886348 | orchestrator | + resource "terraform_data" "image" { 2026-02-03 02:24:10.886352 | orchestrator | + id = (known after apply) 2026-02-03 02:24:10.886356 | orchestrator | + input = "Ubuntu 24.04" 2026-02-03 02:24:10.886359 | orchestrator | + output = (known after apply) 2026-02-03 02:24:10.886363 | orchestrator | } 2026-02-03 02:24:10.886367 | orchestrator | 2026-02-03 02:24:10.886371 | orchestrator | # terraform_data.image_node will be created 2026-02-03 02:24:10.886374 | orchestrator | + resource "terraform_data" "image_node" { 2026-02-03 02:24:10.886378 | orchestrator | + id = (known after apply) 2026-02-03 02:24:10.886382 | orchestrator | + input = "Ubuntu 24.04" 2026-02-03 02:24:10.886385 | orchestrator | + output = (known after apply) 2026-02-03 02:24:10.886389 | orchestrator | } 2026-02-03 02:24:10.886393 | orchestrator | 2026-02-03 02:24:10.886397 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-02-03 02:24:10.886400 | orchestrator |
2026-02-03 02:24:10.886404 | orchestrator | Changes to Outputs:
2026-02-03 02:24:10.886408 | orchestrator | + manager_address = (sensitive value)
2026-02-03 02:24:10.886412 | orchestrator | + private_key = (sensitive value)
2026-02-03 02:24:11.062674 | orchestrator | terraform_data.image: Creating...
2026-02-03 02:24:11.062894 | orchestrator | terraform_data.image_node: Creating...
2026-02-03 02:24:11.063083 | orchestrator | terraform_data.image: Creation complete after 0s [id=0143587c-f50b-ee51-52f1-104cc1b72760]
2026-02-03 02:24:11.063772 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=6e90db35-a87c-bf7c-4103-c88aa3442071]
2026-02-03 02:24:11.077852 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-02-03 02:24:11.081245 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-02-03 02:24:11.100390 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-02-03 02:24:11.100447 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-02-03 02:24:11.100593 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-02-03 02:24:11.101564 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-02-03 02:24:11.101588 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-02-03 02:24:11.103071 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-02-03 02:24:11.103095 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-02-03 02:24:11.116275 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-02-03 02:24:11.565014 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-03 02:24:11.568187 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-02-03 02:24:11.594049 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-03 02:24:11.598271 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-02-03 02:24:11.608926 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-02-03 02:24:11.612237 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-02-03 02:24:12.566602 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 2s [id=36b15332-d0e0-4b8b-9106-a88f3db34558]
2026-02-03 02:24:12.570717 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-02-03 02:24:14.703566 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=15b94581-7087-40af-83f2-cd9970e768be]
2026-02-03 02:24:14.707657 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-02-03 02:24:14.710985 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=8097be92-44ca-4be8-a1da-39ba5887696e]
2026-02-03 02:24:14.713521 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-02-03 02:24:14.726046 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=1ed5f26b-b68f-43a9-951f-f4acae255308]
2026-02-03 02:24:14.729891 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=6b074c22-654d-40e5-9251-7e10d9fad41a]
2026-02-03 02:24:14.734641 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-02-03 02:24:14.742087 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=b4cf4752-8315-482c-8a5b-0aee9859091f]
2026-02-03 02:24:14.745417 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-02-03 02:24:14.748568 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-02-03 02:24:14.760486 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=f58f055b-eadc-4fe1-a72e-2d1917f1f2dd]
2026-02-03 02:24:14.768653 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-02-03 02:24:14.810674 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=a2e14d93-a486-403c-9c37-4f6de49ddee5]
2026-02-03 02:24:14.817262 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-02-03 02:24:14.817661 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=0bcbc917-1e4e-4947-8603-c7f49bd04ea8]
2026-02-03 02:24:14.822966 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=a36c722901a92fe9fd3eb2a5b6dc765a490d09b6]
2026-02-03 02:24:14.827738 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-02-03 02:24:14.831827 | orchestrator | local_file.id_rsa_pub: Creating...
2026-02-03 02:24:14.836770 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=9be81b275a699269071a60309464048cd5c504e5]
2026-02-03 02:24:14.980965 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=30942d1f-f704-43cc-bdd1-e5a5821a35c3]
2026-02-03 02:24:15.701565 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=ada34498-f790-4618-b7fe-2e01ab887a1e]
2026-02-03 02:24:15.705472 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-02-03 02:24:15.883810 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=8b2ebf21-73a8-4948-82f3-6debf7de46ad]
2026-02-03 02:24:18.071446 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=24352e15-6dea-4a0f-b242-96c62f6cf142]
2026-02-03 02:24:18.114220 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=1e34e583-c935-4574-8990-e89cac137457]
2026-02-03 02:24:18.151789 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=26fa6d1d-7884-464f-aecb-162ac10d2371]
2026-02-03 02:24:18.162679 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=9ac79520-7901-4e67-81d0-fc013cb298e8]
2026-02-03 02:24:18.181846 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=5699a710-abd3-43e6-8d32-dcb1e0ac0cbe]
2026-02-03 02:24:18.202631 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=baaa19a3-0465-4ace-be40-0edae040cc8f]
2026-02-03 02:24:18.989237 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=485b2a52-3dab-4ccc-85bb-d62dd1140ec0]
2026-02-03 02:24:18.993615 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-02-03 02:24:18.994059 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-02-03 02:24:18.995025 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-02-03 02:24:19.171883 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=ad17f7dd-17f4-4087-bdef-5c53d7ff740d]
2026-02-03 02:24:19.179732 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-02-03 02:24:19.179780 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-02-03 02:24:19.180203 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-02-03 02:24:19.180763 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-02-03 02:24:19.180947 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-02-03 02:24:19.181703 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-02-03 02:24:19.186864 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-02-03 02:24:19.189349 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-02-03 02:24:19.202130 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=c6df6e8a-b69b-4d46-bbc6-df85672982e4]
2026-02-03 02:24:19.207188 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-02-03 02:24:19.449514 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=ca2a88fb-bc5f-4115-bc43-36445e97520c]
2026-02-03 02:24:19.454780 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-02-03 02:24:19.640617 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=9ef67dbd-3831-488a-896f-6b637b0be83a]
2026-02-03 02:24:19.646948 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-02-03 02:24:19.785206 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=d92f8eb0-8450-4a8f-99ef-84ad3fe34707]
2026-02-03 02:24:19.792754 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-02-03 02:24:19.799595 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=778f5f40-785f-4033-a3e8-7b047a8e0600]
2026-02-03 02:24:19.804001 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-02-03 02:24:19.828930 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=9e9ab637-4635-4621-8af6-8c02c2574b5c]
2026-02-03 02:24:19.833554 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-02-03 02:24:19.842071 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=70f312c3-a754-4a94-a2fd-54d7e3dd32e1]
2026-02-03 02:24:19.846613 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-02-03 02:24:19.880095 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=32c59f2c-7e80-49fc-a0bf-831c0b517e76]
2026-02-03 02:24:19.886910 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-02-03 02:24:19.887157 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=155063e3-d277-4343-b275-682d8b740af6]
2026-02-03 02:24:19.940739 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=e10d86f4-5d24-4332-b32b-b65872c5b3fd]
2026-02-03 02:24:20.014125 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=691c149d-72be-4225-91b4-8747ac971851]
2026-02-03 02:24:20.099270 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=accf8e6b-c7a6-4e34-9ae8-3636b7a39c80]
2026-02-03 02:24:20.242237 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=ceacb22f-313e-444f-a0e4-8ebcd1e2e498]
2026-02-03 02:24:20.343862 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=8b11fe9b-725a-4bb2-9ee9-80b5b7874ca3]
2026-02-03 02:24:20.413723 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=655347a7-d6d4-4584-9fcf-28aa4b6b4bf6]
2026-02-03 02:24:20.533052 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=1dde807b-9cf2-42da-a9d9-489794422608]
2026-02-03 02:24:20.589880 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=6e57457f-d663-4880-818c-4e9f81a35d85]
2026-02-03 02:24:21.585516 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=e6190e7c-79af-47ca-9732-71fd484cec39]
2026-02-03 02:24:21.603619 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-02-03 02:24:21.618464 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-02-03 02:24:21.619030 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-02-03 02:24:21.625722 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-02-03 02:24:21.626659 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-02-03 02:24:21.640910 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-02-03 02:24:21.644118 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-02-03 02:24:23.426841 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=ecc4e38a-c705-48fe-9933-15cf0d2d94f2]
2026-02-03 02:24:24.735828 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-02-03 02:24:24.735886 | orchestrator | local_file.inventory: Creating...
2026-02-03 02:24:24.735896 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-02-03 02:24:24.735904 | orchestrator | local_file.inventory: Creation complete after 1s [id=a6cc5fa739c63fa084c74d51c0d00a7fb3eef83e]
2026-02-03 02:24:24.735912 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 1s [id=8b10bab329e7687faa6afac2a0fd4ced05001755]
2026-02-03 02:24:24.739054 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=ecc4e38a-c705-48fe-9933-15cf0d2d94f2]
2026-02-03 02:24:31.619324 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-02-03 02:24:31.619380 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-02-03 02:24:31.630658 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-02-03 02:24:31.630712 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-02-03 02:24:31.643751 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-02-03 02:24:31.646008 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-02-03 02:24:41.620022 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-02-03 02:24:41.620095 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-02-03 02:24:41.631297 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-02-03 02:24:41.631348 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-02-03 02:24:41.644490 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-02-03 02:24:41.646697 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-02-03 02:24:42.116890 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=c108d1a7-d57b-479a-b7e2-bc64f0828d86]
2026-02-03 02:24:42.134523 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 20s [id=87d6dca3-5454-4fe1-951a-5238126d977a]
2026-02-03 02:24:42.174167 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=de32b492-5db0-47a7-b09f-18f7dc03b2aa]
2026-02-03 02:24:42.191866 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 20s [id=080d067c-6eab-42a3-af61-3d81c6fe5eef]
2026-02-03 02:24:51.620671 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-02-03 02:24:51.645622 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-02-03 02:24:52.452017 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 30s [id=222458ab-94b2-4b6e-b0ec-5aea0821415b]
2026-02-03 02:24:52.511225 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=24fb0d47-a309-4b5a-a205-bbaeec6a366f]
2026-02-03 02:24:52.522659 | orchestrator | null_resource.node_semaphore: Creating...
2026-02-03 02:24:52.528606 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-02-03 02:24:52.528681 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-02-03 02:24:52.540169 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=4761273005992506945]
2026-02-03 02:24:52.541426 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-02-03 02:24:52.542178 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-02-03 02:24:52.542796 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-02-03 02:24:52.544779 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-02-03 02:24:52.551011 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-02-03 02:24:52.562589 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-02-03 02:24:52.565906 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-02-03 02:24:52.572629 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-02-03 02:24:55.921637 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=de32b492-5db0-47a7-b09f-18f7dc03b2aa/15b94581-7087-40af-83f2-cd9970e768be]
2026-02-03 02:24:55.939348 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=24fb0d47-a309-4b5a-a205-bbaeec6a366f/30942d1f-f704-43cc-bdd1-e5a5821a35c3]
2026-02-03 02:24:55.952200 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=c108d1a7-d57b-479a-b7e2-bc64f0828d86/1ed5f26b-b68f-43a9-951f-f4acae255308]
2026-02-03 02:24:55.966938 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=c108d1a7-d57b-479a-b7e2-bc64f0828d86/0bcbc917-1e4e-4947-8603-c7f49bd04ea8]
2026-02-03 02:24:55.968127 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=de32b492-5db0-47a7-b09f-18f7dc03b2aa/f58f055b-eadc-4fe1-a72e-2d1917f1f2dd]
2026-02-03 02:24:55.981166 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=24fb0d47-a309-4b5a-a205-bbaeec6a366f/8097be92-44ca-4be8-a1da-39ba5887696e]
2026-02-03 02:25:02.049671 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 9s [id=de32b492-5db0-47a7-b09f-18f7dc03b2aa/6b074c22-654d-40e5-9251-7e10d9fad41a]
2026-02-03 02:25:02.070615 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 9s [id=24fb0d47-a309-4b5a-a205-bbaeec6a366f/b4cf4752-8315-482c-8a5b-0aee9859091f]
2026-02-03 02:25:02.087665 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 9s [id=c108d1a7-d57b-479a-b7e2-bc64f0828d86/a2e14d93-a486-403c-9c37-4f6de49ddee5]
2026-02-03 02:25:02.575707 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-02-03 02:25:12.576641 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-02-03 02:25:13.382213 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=591c8101-1af8-4129-8111-70e28e1ff05d]
2026-02-03 02:25:13.397079 | orchestrator |
2026-02-03 02:25:13.397133 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-02-03 02:25:13.397139 | orchestrator |
2026-02-03 02:25:13.397143 | orchestrator | Outputs:
2026-02-03 02:25:13.397148 | orchestrator |
2026-02-03 02:25:13.397157 | orchestrator | manager_address =
2026-02-03 02:25:13.397161 | orchestrator | private_key =
2026-02-03 02:25:13.764148 | orchestrator | ok: Runtime: 0:01:07.080932
2026-02-03 02:25:13.796358 |
2026-02-03 02:25:13.796478 | TASK [Fetch manager address]
2026-02-03 02:25:14.264713 | orchestrator | ok
2026-02-03 02:25:14.276603 |
2026-02-03 02:25:14.276769 | TASK [Set manager_host address]
2026-02-03 02:25:14.340593 | orchestrator | ok
2026-02-03 02:25:14.347700 |
2026-02-03 02:25:14.347813 | LOOP [Update ansible collections]
2026-02-03 02:25:18.144518 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-03 02:25:18.144982 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-03 02:25:18.145064 | orchestrator | Starting galaxy collection install process
2026-02-03 02:25:18.145119 | orchestrator | Process install dependency map
2026-02-03 02:25:18.145182 | orchestrator | Starting collection install process
2026-02-03 02:25:18.145229 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons'
2026-02-03 02:25:18.145269 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons
2026-02-03 02:25:18.145312 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-02-03 02:25:18.145430 | orchestrator | ok: Item: commons Runtime: 0:00:03.475263
2026-02-03 02:25:19.646733 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-03 02:25:19.646942 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-03 02:25:19.646999 | orchestrator | Starting galaxy collection install process
2026-02-03 02:25:19.647039 | orchestrator | Process install dependency map
2026-02-03 02:25:19.647076 | orchestrator | Starting collection install process
2026-02-03 02:25:19.647112 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services'
2026-02-03 02:25:19.647147 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services
2026-02-03 02:25:19.647179 | orchestrator | osism.services:999.0.0 was installed successfully
2026-02-03 02:25:19.647232 | orchestrator | ok: Item: services Runtime: 0:00:01.181184
2026-02-03 02:25:19.669376 |
2026-02-03 02:25:19.669555 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-02-03 02:25:30.277437 | orchestrator | ok
2026-02-03 02:25:30.290381 |
2026-02-03 02:25:30.290572 | TASK [Wait a little longer for the manager so that everything is ready]
2026-02-03 02:26:30.343026 | orchestrator | ok
2026-02-03 02:26:30.354694 |
2026-02-03 02:26:30.354817 | TASK [Fetch manager ssh hostkey]
2026-02-03 02:26:31.931455 | orchestrator | Output suppressed because no_log was given
2026-02-03 02:26:31.946488 |
2026-02-03 02:26:31.946687 | TASK [Get ssh keypair from terraform environment]
2026-02-03 02:26:32.481252 | orchestrator | ok: Runtime: 0:00:00.008503
2026-02-03 02:26:32.498685 |
2026-02-03 02:26:32.498872 | TASK [Point out that the following task takes some time and does not give any output]
2026-02-03 02:26:32.549145 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-02-03 02:26:32.559740 |
2026-02-03 02:26:32.559933 | TASK [Run manager part 0]
2026-02-03 02:26:34.343002 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-03 02:26:34.409131 | orchestrator |
2026-02-03 02:26:34.409172 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-02-03 02:26:34.409182 | orchestrator |
2026-02-03 02:26:34.409198 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-02-03 02:26:36.345342 | orchestrator | ok: [testbed-manager]
2026-02-03 02:26:36.345386 | orchestrator |
2026-02-03 02:26:36.345430 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-02-03 02:26:36.345444 | orchestrator |
2026-02-03 02:26:36.345456 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-03 02:26:38.433447 | orchestrator | ok: [testbed-manager]
2026-02-03 02:26:38.433504 | orchestrator |
2026-02-03 02:26:38.433515 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-02-03 02:26:39.150265 | orchestrator | ok: [testbed-manager]
2026-02-03 02:26:39.150312 | orchestrator |
2026-02-03 02:26:39.150322 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-02-03 02:26:39.205938 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:26:39.205983 | orchestrator |
2026-02-03 02:26:39.205997 | orchestrator | TASK [Update package cache] ****************************************************
2026-02-03 02:26:39.242602 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:26:39.242650 | orchestrator |
2026-02-03 02:26:39.242663 | orchestrator | TASK [Install required packages] *********************************************** 2026-02-03 02:26:39.284471 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:26:39.284521 | orchestrator | 2026-02-03 02:26:39.284528 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-02-03 02:26:39.318662 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:26:39.318709 | orchestrator | 2026-02-03 02:26:39.318718 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-03 02:26:39.354145 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:26:39.354202 | orchestrator | 2026-02-03 02:26:39.354214 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-02-03 02:26:39.391315 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:26:39.391439 | orchestrator | 2026-02-03 02:26:39.391452 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-02-03 02:26:39.425471 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:26:39.425531 | orchestrator | 2026-02-03 02:26:39.425542 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-02-03 02:26:40.114353 | orchestrator | changed: [testbed-manager] 2026-02-03 02:26:40.114399 | orchestrator | 2026-02-03 02:26:40.114426 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-02-03 02:29:29.412835 | orchestrator | changed: [testbed-manager] 2026-02-03 02:29:29.412892 | orchestrator | 2026-02-03 02:29:29.412902 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-02-03 02:32:07.467137 | orchestrator | changed: [testbed-manager] 2026-02-03 02:32:07.467220 | orchestrator | 2026-02-03 02:32:07.467232 | orchestrator | TASK [Install required 
packages] *********************************************** 2026-02-03 02:32:31.157669 | orchestrator | changed: [testbed-manager] 2026-02-03 02:32:31.157765 | orchestrator | 2026-02-03 02:32:31.157778 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-02-03 02:32:40.013917 | orchestrator | changed: [testbed-manager] 2026-02-03 02:32:40.014004 | orchestrator | 2026-02-03 02:32:40.014048 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-03 02:32:40.051997 | orchestrator | ok: [testbed-manager] 2026-02-03 02:32:40.052094 | orchestrator | 2026-02-03 02:32:40.052107 | orchestrator | TASK [Get current user] ******************************************************** 2026-02-03 02:32:40.859998 | orchestrator | ok: [testbed-manager] 2026-02-03 02:32:40.860059 | orchestrator | 2026-02-03 02:32:40.860068 | orchestrator | TASK [Create venv directory] *************************************************** 2026-02-03 02:32:41.542542 | orchestrator | changed: [testbed-manager] 2026-02-03 02:32:41.543109 | orchestrator | 2026-02-03 02:32:41.543128 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-02-03 02:32:47.729529 | orchestrator | changed: [testbed-manager] 2026-02-03 02:32:47.729609 | orchestrator | 2026-02-03 02:32:47.729640 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-02-03 02:32:53.471331 | orchestrator | changed: [testbed-manager] 2026-02-03 02:32:53.472037 | orchestrator | 2026-02-03 02:32:53.472068 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-02-03 02:32:56.025497 | orchestrator | changed: [testbed-manager] 2026-02-03 02:32:56.025559 | orchestrator | 2026-02-03 02:32:56.025568 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-02-03 02:32:57.703502 | 
orchestrator | changed: [testbed-manager] 2026-02-03 02:32:57.703561 | orchestrator | 2026-02-03 02:32:57.703572 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-02-03 02:32:58.710275 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-03 02:32:58.710387 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-03 02:32:58.710398 | orchestrator | 2026-02-03 02:32:58.710407 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-02-03 02:32:58.752796 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-03 02:32:58.752862 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-03 02:32:58.752872 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-03 02:32:58.752879 | orchestrator | deprecation_warnings=False in ansible.cfg. 
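
The "Install ... in venv" tasks above amount to pinned pip installs into a virtual environment. A rough shell equivalent, with `/tmp/demo-venv` standing in for the job's `/opt/venv` (the pins are taken from the task names; network access is assumed for the pip step, so its failure is tolerated here):

```shell
# Create a virtual environment and install the pinned dependencies,
# mirroring the "Install netaddr/requests/docker in venv" tasks above.
# /tmp/demo-venv is a placeholder path, not the job's /opt/venv.
python3 -m venv /tmp/demo-venv
/tmp/demo-venv/bin/pip install --quiet 'netaddr' 'requests>=2.32.2' 'docker>=7.1.0' || true
# The venv's interpreter reports the venv directory as its prefix.
/tmp/demo-venv/bin/python -c 'import sys; print(sys.prefix)'
```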
2026-02-03 02:33:06.510975 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-03 02:33:06.511161 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-03 02:33:06.511177 | orchestrator | 2026-02-03 02:33:06.511182 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-02-03 02:33:07.055579 | orchestrator | changed: [testbed-manager] 2026-02-03 02:33:07.055660 | orchestrator | 2026-02-03 02:33:07.055695 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-02-03 02:35:26.859257 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-02-03 02:35:26.859373 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-02-03 02:35:26.859389 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-02-03 02:35:26.859397 | orchestrator | 2026-02-03 02:35:26.859405 | orchestrator | TASK [Install local collections] *********************************************** 2026-02-03 02:35:29.360361 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-02-03 02:35:29.360443 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-02-03 02:35:29.360456 | orchestrator | 2026-02-03 02:35:29.360468 | orchestrator | PLAY [Create operator user] **************************************************** 2026-02-03 02:35:29.360480 | orchestrator | 2026-02-03 02:35:29.360491 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-03 02:35:30.849750 | orchestrator | ok: [testbed-manager] 2026-02-03 02:35:30.849822 | orchestrator | 2026-02-03 02:35:30.849831 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-03 02:35:30.896820 | orchestrator | ok: [testbed-manager] 2026-02-03 02:35:30.896921 | 
orchestrator | 2026-02-03 02:35:30.896937 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-03 02:35:30.961512 | orchestrator | ok: [testbed-manager] 2026-02-03 02:35:30.961597 | orchestrator | 2026-02-03 02:35:30.961608 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-03 02:35:31.797753 | orchestrator | changed: [testbed-manager] 2026-02-03 02:35:31.797835 | orchestrator | 2026-02-03 02:35:31.797847 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-03 02:35:32.580101 | orchestrator | changed: [testbed-manager] 2026-02-03 02:35:32.580191 | orchestrator | 2026-02-03 02:35:32.580205 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-03 02:35:34.020998 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-02-03 02:35:34.021081 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-02-03 02:35:34.021122 | orchestrator | 2026-02-03 02:35:34.021165 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-03 02:35:35.372087 | orchestrator | changed: [testbed-manager] 2026-02-03 02:35:35.372175 | orchestrator | 2026-02-03 02:35:35.372184 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-03 02:35:37.215965 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-02-03 02:35:37.216569 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-02-03 02:35:37.216585 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-02-03 02:35:37.216591 | orchestrator | 2026-02-03 02:35:37.216599 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-03 02:35:37.260973 | orchestrator | skipping: 
[testbed-manager] 2026-02-03 02:35:37.261052 | orchestrator | 2026-02-03 02:35:37.261064 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-03 02:35:37.332465 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:35:37.332549 | orchestrator | 2026-02-03 02:35:37.332565 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-03 02:35:37.907005 | orchestrator | changed: [testbed-manager] 2026-02-03 02:35:37.907064 | orchestrator | 2026-02-03 02:35:37.907074 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-03 02:35:37.982345 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:35:37.982397 | orchestrator | 2026-02-03 02:35:37.982404 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-03 02:35:38.898126 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-03 02:35:38.898184 | orchestrator | changed: [testbed-manager] 2026-02-03 02:35:38.898193 | orchestrator | 2026-02-03 02:35:38.898201 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-03 02:35:38.930912 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:35:38.930964 | orchestrator | 2026-02-03 02:35:38.930971 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-03 02:35:38.956665 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:35:38.956711 | orchestrator | 2026-02-03 02:35:38.956717 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-03 02:35:38.990771 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:35:38.990829 | orchestrator | 2026-02-03 02:35:38.990839 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-03 02:35:39.063716 | 
orchestrator | skipping: [testbed-manager] 2026-02-03 02:35:39.063777 | orchestrator | 2026-02-03 02:35:39.063786 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-03 02:35:39.788379 | orchestrator | ok: [testbed-manager] 2026-02-03 02:35:39.788445 | orchestrator | 2026-02-03 02:35:39.788456 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-02-03 02:35:39.788465 | orchestrator | 2026-02-03 02:35:39.788474 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-03 02:35:41.223171 | orchestrator | ok: [testbed-manager] 2026-02-03 02:35:41.223239 | orchestrator | 2026-02-03 02:35:41.223253 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-02-03 02:35:42.182834 | orchestrator | changed: [testbed-manager] 2026-02-03 02:35:42.182888 | orchestrator | 2026-02-03 02:35:42.182898 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 02:35:42.182906 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-02-03 02:35:42.182913 | orchestrator | 2026-02-03 02:35:42.396436 | orchestrator | ok: Runtime: 0:09:09.379928 2026-02-03 02:35:42.413859 | 2026-02-03 02:35:42.413985 | TASK [Point out that logging in on the manager is now possible] 2026-02-03 02:35:42.462409 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-02-03 02:35:42.473502 | 2026-02-03 02:35:42.473704 | TASK [Point out that the following task takes some time and does not give any output] 2026-02-03 02:35:42.512154 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
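
Several tasks in this job wait for port 22 to answer with an "OpenSSH" banner before proceeding. A hedged shell sketch of that check; the substring match is factored into `banner_matches` so it can be exercised without a live host, and the use of `nc` here is an assumption for illustration, not the job's actual mechanism (the job uses Ansible's wait task):

```shell
# Return success when the banner contains the expected substring.
banner_matches() {
  case "$1" in
    *"$2"*) return 0 ;;
    *)      return 1 ;;
  esac
}

# Poll until the TCP banner on $host:$port matches, or the deadline passes.
# This mirrors the "Wait up to 300 seconds for port 22 ... OpenSSH" tasks above.
wait_for_banner() {
  host="$1"; port="$2"; pattern="$3"; timeout="${4:-300}"
  deadline=$(( $(date +%s) + timeout ))
  while [ "$(date +%s)" -lt "$deadline" ]; do
    banner="$(nc -w 2 "$host" "$port" </dev/null 2>/dev/null | head -n 1)"
    if banner_matches "$banner" "$pattern"; then
      return 0
    fi
    sleep 2
  done
  return 1
}
```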
2026-02-03 02:35:42.522257 | 2026-02-03 02:35:42.522381 | TASK [Run manager part 1 + 2] 2026-02-03 02:35:43.515992 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-03 02:35:43.576158 | orchestrator | 2026-02-03 02:35:43.576257 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-02-03 02:35:43.576275 | orchestrator | 2026-02-03 02:35:43.576338 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-03 02:35:46.691056 | orchestrator | ok: [testbed-manager] 2026-02-03 02:35:46.691144 | orchestrator | 2026-02-03 02:35:46.691212 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-03 02:35:46.729667 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:35:46.729734 | orchestrator | 2026-02-03 02:35:46.729755 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-03 02:35:46.766963 | orchestrator | ok: [testbed-manager] 2026-02-03 02:35:46.767006 | orchestrator | 2026-02-03 02:35:46.767018 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-03 02:35:46.820528 | orchestrator | ok: [testbed-manager] 2026-02-03 02:35:46.820595 | orchestrator | 2026-02-03 02:35:46.820606 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-03 02:35:46.892523 | orchestrator | ok: [testbed-manager] 2026-02-03 02:35:46.892583 | orchestrator | 2026-02-03 02:35:46.892597 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-03 02:35:46.954788 | orchestrator | ok: [testbed-manager] 2026-02-03 02:35:46.954859 | orchestrator | 2026-02-03 02:35:46.954878 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-03 02:35:47.008771 | 
orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-02-03 02:35:47.008817 | orchestrator | 2026-02-03 02:35:47.008824 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-03 02:35:47.782261 | orchestrator | ok: [testbed-manager] 2026-02-03 02:35:47.782374 | orchestrator | 2026-02-03 02:35:47.782392 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-03 02:35:47.823553 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:35:47.823601 | orchestrator | 2026-02-03 02:35:47.823609 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-03 02:35:49.271368 | orchestrator | changed: [testbed-manager] 2026-02-03 02:35:49.271452 | orchestrator | 2026-02-03 02:35:49.271473 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-03 02:35:49.869093 | orchestrator | ok: [testbed-manager] 2026-02-03 02:35:49.869153 | orchestrator | 2026-02-03 02:35:49.869165 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-03 02:35:51.065450 | orchestrator | changed: [testbed-manager] 2026-02-03 02:35:51.065504 | orchestrator | 2026-02-03 02:35:51.065516 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-03 02:36:07.632447 | orchestrator | changed: [testbed-manager] 2026-02-03 02:36:07.632577 | orchestrator | 2026-02-03 02:36:07.632596 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-02-03 02:36:08.326196 | orchestrator | ok: [testbed-manager] 2026-02-03 02:36:08.326244 | orchestrator | 2026-02-03 02:36:08.326254 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-02-03 02:36:08.378053 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:36:08.378145 | orchestrator | 2026-02-03 02:36:08.378167 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-02-03 02:36:09.370193 | orchestrator | changed: [testbed-manager] 2026-02-03 02:36:09.370232 | orchestrator | 2026-02-03 02:36:09.370238 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-02-03 02:36:10.333947 | orchestrator | changed: [testbed-manager] 2026-02-03 02:36:10.334047 | orchestrator | 2026-02-03 02:36:10.334062 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-02-03 02:36:10.945694 | orchestrator | changed: [testbed-manager] 2026-02-03 02:36:10.945834 | orchestrator | 2026-02-03 02:36:10.945865 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-02-03 02:36:10.985512 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-03 02:36:10.985619 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-03 02:36:10.985628 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-03 02:36:10.985634 | orchestrator | deprecation_warnings=False in ansible.cfg. 
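
The deprecation warnings above note that they can be disabled by setting `deprecation_warnings=False` in `ansible.cfg`. A minimal sketch writing that setting to a throwaway config file (`/tmp/demo-ansible.cfg` is a placeholder; in practice you would edit the job's own `ansible.cfg`):

```shell
# Write the silencing option the warning text refers to into a demo config.
# /tmp/demo-ansible.cfg is a stand-in path for illustration only.
cat > /tmp/demo-ansible.cfg <<'EOF'
[defaults]
deprecation_warnings = False
EOF
grep 'deprecation_warnings' /tmp/demo-ansible.cfg
```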
2026-02-03 02:36:13.318099 | orchestrator | changed: [testbed-manager] 2026-02-03 02:36:13.318169 | orchestrator | 2026-02-03 02:36:13.318177 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-02-03 02:36:22.516129 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-02-03 02:36:22.516217 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-02-03 02:36:22.516230 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-02-03 02:36:22.516238 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-02-03 02:36:22.516253 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-02-03 02:36:22.516261 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-02-03 02:36:22.516290 | orchestrator | 2026-02-03 02:36:22.516299 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-02-03 02:36:23.618121 | orchestrator | changed: [testbed-manager] 2026-02-03 02:36:23.618193 | orchestrator | 2026-02-03 02:36:23.618205 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-02-03 02:36:23.661358 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:36:23.661434 | orchestrator | 2026-02-03 02:36:23.661444 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-02-03 02:36:27.072164 | orchestrator | changed: [testbed-manager] 2026-02-03 02:36:27.072248 | orchestrator | 2026-02-03 02:36:27.072396 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-02-03 02:36:27.110104 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:36:27.110190 | orchestrator | 2026-02-03 02:36:27.110208 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-02-03 02:38:15.187055 | orchestrator | changed: [testbed-manager] 2026-02-03 
02:38:15.187095 | orchestrator | 2026-02-03 02:38:15.187103 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-03 02:38:16.424725 | orchestrator | ok: [testbed-manager] 2026-02-03 02:38:16.424782 | orchestrator | 2026-02-03 02:38:16.424792 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 02:38:16.424802 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-02-03 02:38:16.424810 | orchestrator | 2026-02-03 02:38:16.659147 | orchestrator | ok: Runtime: 0:02:33.666353 2026-02-03 02:38:16.677955 | 2026-02-03 02:38:16.678181 | TASK [Reboot manager] 2026-02-03 02:38:18.214608 | orchestrator | ok: Runtime: 0:00:01.079504 2026-02-03 02:38:18.233516 | 2026-02-03 02:38:18.233683 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-02-03 02:38:34.696320 | orchestrator | ok 2026-02-03 02:38:34.708226 | 2026-02-03 02:38:34.708359 | TASK [Wait a little longer for the manager so that everything is ready] 2026-02-03 02:39:34.759807 | orchestrator | ok 2026-02-03 02:39:34.769835 | 2026-02-03 02:39:34.769959 | TASK [Deploy manager + bootstrap nodes] 2026-02-03 02:39:37.505134 | orchestrator | 2026-02-03 02:39:37.505310 | orchestrator | # DEPLOY MANAGER 2026-02-03 02:39:37.505330 | orchestrator | 2026-02-03 02:39:37.505340 | orchestrator | + set -e 2026-02-03 02:39:37.505348 | orchestrator | + echo 2026-02-03 02:39:37.505357 | orchestrator | + echo '# DEPLOY MANAGER' 2026-02-03 02:39:37.505369 | orchestrator | + echo 2026-02-03 02:39:37.505413 | orchestrator | + cat /opt/manager-vars.sh 2026-02-03 02:39:37.508922 | orchestrator | export NUMBER_OF_NODES=6 2026-02-03 02:39:37.508982 | orchestrator | 2026-02-03 02:39:37.508999 | orchestrator | export CEPH_VERSION=reef 2026-02-03 02:39:37.509013 | orchestrator | export CONFIGURATION_VERSION=main 2026-02-03 02:39:37.509025 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-02-03 02:39:37.509055 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-02-03 02:39:37.509070 | orchestrator | 2026-02-03 02:39:37.509088 | orchestrator | export ARA=false 2026-02-03 02:39:37.509101 | orchestrator | export DEPLOY_MODE=manager 2026-02-03 02:39:37.509118 | orchestrator | export TEMPEST=false 2026-02-03 02:39:37.509133 | orchestrator | export IS_ZUUL=true 2026-02-03 02:39:37.509145 | orchestrator | 2026-02-03 02:39:37.509164 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-02-03 02:39:37.509177 | orchestrator | export EXTERNAL_API=false 2026-02-03 02:39:37.509190 | orchestrator | 2026-02-03 02:39:37.509199 | orchestrator | export IMAGE_USER=ubuntu 2026-02-03 02:39:37.509210 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-02-03 02:39:37.509217 | orchestrator | 2026-02-03 02:39:37.509245 | orchestrator | export CEPH_STACK=ceph-ansible 2026-02-03 02:39:37.509253 | orchestrator | 2026-02-03 02:39:37.509261 | orchestrator | + echo 2026-02-03 02:39:37.509270 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-03 02:39:37.509894 | orchestrator | ++ export INTERACTIVE=false 2026-02-03 02:39:37.509915 | orchestrator | ++ INTERACTIVE=false 2026-02-03 02:39:37.509930 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-03 02:39:37.509946 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-03 02:39:37.510052 | orchestrator | + source /opt/manager-vars.sh 2026-02-03 02:39:37.510072 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-03 02:39:37.510086 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-03 02:39:37.510100 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-03 02:39:37.510114 | orchestrator | ++ CEPH_VERSION=reef 2026-02-03 02:39:37.510127 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-03 02:39:37.510141 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-03 02:39:37.510154 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-03 02:39:37.510168 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-03 02:39:37.510182 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-03 02:39:37.510209 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-03 02:39:37.510255 | orchestrator | ++ export ARA=false 2026-02-03 02:39:37.510270 | orchestrator | ++ ARA=false 2026-02-03 02:39:37.510283 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-03 02:39:37.510296 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-03 02:39:37.510312 | orchestrator | ++ export TEMPEST=false 2026-02-03 02:39:37.510333 | orchestrator | ++ TEMPEST=false 2026-02-03 02:39:37.510347 | orchestrator | ++ export IS_ZUUL=true 2026-02-03 02:39:37.510361 | orchestrator | ++ IS_ZUUL=true 2026-02-03 02:39:37.510374 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-02-03 02:39:37.510388 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-02-03 02:39:37.510403 | orchestrator | ++ export EXTERNAL_API=false 2026-02-03 02:39:37.510417 | orchestrator | ++ EXTERNAL_API=false 2026-02-03 02:39:37.510432 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-03 02:39:37.510446 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-03 02:39:37.510461 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-03 02:39:37.510475 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-03 02:39:37.510489 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-03 02:39:37.510504 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-03 02:39:37.510518 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-02-03 02:39:37.574638 | orchestrator | + docker version 2026-02-03 02:39:37.863121 | orchestrator | Client: Docker Engine - Community 2026-02-03 02:39:37.863292 | orchestrator | Version: 27.5.1 2026-02-03 02:39:37.863320 | orchestrator | API version: 1.47 2026-02-03 02:39:37.863331 | orchestrator | Go version: go1.22.11 2026-02-03 02:39:37.863344 | orchestrator | Git commit: 9f9e405 2026-02-03 02:39:37.863358 
| orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-03 02:39:37.863379 | orchestrator | OS/Arch: linux/amd64 2026-02-03 02:39:37.863398 | orchestrator | Context: default 2026-02-03 02:39:37.863412 | orchestrator | 2026-02-03 02:39:37.863426 | orchestrator | Server: Docker Engine - Community 2026-02-03 02:39:37.863439 | orchestrator | Engine: 2026-02-03 02:39:37.863453 | orchestrator | Version: 27.5.1 2026-02-03 02:39:37.863467 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-02-03 02:39:37.863525 | orchestrator | Go version: go1.22.11 2026-02-03 02:39:37.863540 | orchestrator | Git commit: 4c9b3b0 2026-02-03 02:39:37.864109 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-03 02:39:37.864152 | orchestrator | OS/Arch: linux/amd64 2026-02-03 02:39:37.864161 | orchestrator | Experimental: false 2026-02-03 02:39:37.864170 | orchestrator | containerd: 2026-02-03 02:39:37.864180 | orchestrator | Version: v2.2.1 2026-02-03 02:39:37.864189 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-02-03 02:39:37.864198 | orchestrator | runc: 2026-02-03 02:39:37.864207 | orchestrator | Version: 1.3.4 2026-02-03 02:39:37.864216 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-02-03 02:39:37.864261 | orchestrator | docker-init: 2026-02-03 02:39:37.864273 | orchestrator | Version: 0.19.0 2026-02-03 02:39:37.864283 | orchestrator | GitCommit: de40ad0 2026-02-03 02:39:37.867704 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-02-03 02:39:37.877309 | orchestrator | + set -e 2026-02-03 02:39:37.877389 | orchestrator | + source /opt/manager-vars.sh 2026-02-03 02:39:37.877405 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-03 02:39:37.877417 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-03 02:39:37.877428 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-03 02:39:37.877439 | orchestrator | ++ CEPH_VERSION=reef 2026-02-03 02:39:37.877450 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-03 
02:39:37.877463 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-03 02:39:37.877474 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-03 02:39:37.877485 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-03 02:39:37.877497 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-03 02:39:37.877508 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-03 02:39:37.877519 | orchestrator | ++ export ARA=false
2026-02-03 02:39:37.877531 | orchestrator | ++ ARA=false
2026-02-03 02:39:37.877542 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-03 02:39:37.877553 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-03 02:39:37.877564 | orchestrator | ++ export TEMPEST=false
2026-02-03 02:39:37.877575 | orchestrator | ++ TEMPEST=false
2026-02-03 02:39:37.877586 | orchestrator | ++ export IS_ZUUL=true
2026-02-03 02:39:37.877596 | orchestrator | ++ IS_ZUUL=true
2026-02-03 02:39:37.877607 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115
2026-02-03 02:39:37.877619 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115
2026-02-03 02:39:37.877630 | orchestrator | ++ export EXTERNAL_API=false
2026-02-03 02:39:37.877641 | orchestrator | ++ EXTERNAL_API=false
2026-02-03 02:39:37.877651 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-03 02:39:37.877662 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-03 02:39:37.877673 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-03 02:39:37.877685 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-03 02:39:37.877696 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-03 02:39:37.877707 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-03 02:39:37.877718 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-03 02:39:37.877729 | orchestrator | ++ export INTERACTIVE=false
2026-02-03 02:39:37.877740 | orchestrator | ++ INTERACTIVE=false
2026-02-03 02:39:37.877750 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-03 02:39:37.877766 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-03 02:39:37.877777 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-02-03 02:39:37.877788 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0
2026-02-03 02:39:37.885896 | orchestrator | + set -e
2026-02-03 02:39:37.885973 | orchestrator | + VERSION=9.5.0
2026-02-03 02:39:37.885990 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml
2026-02-03 02:39:37.896589 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-02-03 02:39:37.896674 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-02-03 02:39:37.901499 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-02-03 02:39:37.904909 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-02-03 02:39:37.912941 | orchestrator | /opt/configuration ~
2026-02-03 02:39:37.913011 | orchestrator | + set -e
2026-02-03 02:39:37.913023 | orchestrator | + pushd /opt/configuration
2026-02-03 02:39:37.913034 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-03 02:39:37.914991 | orchestrator | + source /opt/venv/bin/activate
2026-02-03 02:39:37.916758 | orchestrator | ++ deactivate nondestructive
2026-02-03 02:39:37.916795 | orchestrator | ++ '[' -n '' ']'
2026-02-03 02:39:37.916808 | orchestrator | ++ '[' -n '' ']'
2026-02-03 02:39:37.916844 | orchestrator | ++ hash -r
2026-02-03 02:39:37.916958 | orchestrator | ++ '[' -n '' ']'
2026-02-03 02:39:37.916973 | orchestrator | ++ unset VIRTUAL_ENV
2026-02-03 02:39:37.916982 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-02-03 02:39:37.916992 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-02-03 02:39:37.917095 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-02-03 02:39:37.917109 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-02-03 02:39:37.917353 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-02-03 02:39:37.917375 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-02-03 02:39:37.917392 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-03 02:39:37.917546 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-03 02:39:37.917567 | orchestrator | ++ export PATH
2026-02-03 02:39:37.917584 | orchestrator | ++ '[' -n '' ']'
2026-02-03 02:39:37.917601 | orchestrator | ++ '[' -z '' ']'
2026-02-03 02:39:37.917624 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-02-03 02:39:37.917647 | orchestrator | ++ PS1='(venv) '
2026-02-03 02:39:37.917668 | orchestrator | ++ export PS1
2026-02-03 02:39:37.917685 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-02-03 02:39:37.917702 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-02-03 02:39:37.917777 | orchestrator | ++ hash -r
2026-02-03 02:39:37.917961 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-02-03 02:39:39.190161 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-02-03 02:39:39.191006 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2026-02-03 02:39:39.192808 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-02-03 02:39:39.194333 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-02-03 02:39:39.195605 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-02-03 02:39:39.206094 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-02-03 02:39:39.207448 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-02-03 02:39:39.208387 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-02-03 02:39:39.209537 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-02-03 02:39:39.249673 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4)
2026-02-03 02:39:39.250849 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-02-03 02:39:39.252775 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-02-03 02:39:39.254130 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4)
2026-02-03 02:39:39.258151 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-02-03 02:39:39.519142 | orchestrator | ++ which gilt
2026-02-03 02:39:39.523974 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-02-03 02:39:39.524049 | orchestrator | + /opt/venv/bin/gilt overlay
2026-02-03 02:39:39.804614 | orchestrator | osism.cfg-generics:
2026-02-03 02:39:39.991873 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-02-03 02:39:39.991996 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-02-03 02:39:39.993587 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-02-03 02:39:39.993657 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-02-03 02:39:40.782051 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-02-03 02:39:40.792554 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-02-03 02:39:41.149986 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-02-03 02:39:41.213541 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-03 02:39:41.213625 | orchestrator | + deactivate
2026-02-03 02:39:41.213635 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-02-03 02:39:41.213643 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-03 02:39:41.213649 | orchestrator | + export PATH
2026-02-03 02:39:41.213655 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-02-03 02:39:41.213662 | orchestrator | + '[' -n '' ']'
2026-02-03 02:39:41.213669 | orchestrator | + hash -r
2026-02-03 02:39:41.213684 | orchestrator | ~
2026-02-03 02:39:41.213690 | orchestrator | + '[' -n '' ']'
2026-02-03 02:39:41.213696 | orchestrator | + unset VIRTUAL_ENV
2026-02-03 02:39:41.213702 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-02-03 02:39:41.213708 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-02-03 02:39:41.213713 | orchestrator | + unset -f deactivate
2026-02-03 02:39:41.213719 | orchestrator | + popd
2026-02-03 02:39:41.216433 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-02-03 02:39:41.216484 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-02-03 02:39:41.217555 | orchestrator | ++ semver 9.5.0 7.0.0
2026-02-03 02:39:41.291121 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-03 02:39:41.291207 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-02-03 02:39:41.292095 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-02-03 02:39:41.363736 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-03 02:39:41.364498 | orchestrator | ++ semver 2024.2 2025.1
2026-02-03 02:39:41.430537 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-03 02:39:41.430629 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-02-03 02:39:41.532685 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-03 02:39:41.532800 | orchestrator | + source /opt/venv/bin/activate
2026-02-03 02:39:41.532817 | orchestrator | ++ deactivate nondestructive
2026-02-03 02:39:41.532832 | orchestrator | ++ '[' -n '' ']'
2026-02-03 02:39:41.532846 | orchestrator | ++ '[' -n '' ']'
2026-02-03 02:39:41.532860 | orchestrator | ++ hash -r
2026-02-03 02:39:41.532874 | orchestrator | ++ '[' -n '' ']'
2026-02-03 02:39:41.532888 | orchestrator | ++ unset VIRTUAL_ENV
2026-02-03 02:39:41.532902 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-02-03 02:39:41.532916 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-02-03 02:39:41.532931 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-02-03 02:39:41.532945 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-02-03 02:39:41.532960 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-02-03 02:39:41.532975 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-02-03 02:39:41.532990 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-03 02:39:41.533028 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-03 02:39:41.533043 | orchestrator | ++ export PATH
2026-02-03 02:39:41.533055 | orchestrator | ++ '[' -n '' ']'
2026-02-03 02:39:41.533068 | orchestrator | ++ '[' -z '' ']'
2026-02-03 02:39:41.533080 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-02-03 02:39:41.533093 | orchestrator | ++ PS1='(venv) '
2026-02-03 02:39:41.533107 | orchestrator | ++ export PS1
2026-02-03 02:39:41.533120 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-02-03 02:39:41.533134 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-02-03 02:39:41.533147 | orchestrator | ++ hash -r
2026-02-03 02:39:41.533161 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-02-03 02:39:42.846667 | orchestrator |
2026-02-03 02:39:42.846752 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-02-03 02:39:42.846763 | orchestrator |
2026-02-03 02:39:42.846770 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-03 02:39:43.477172 | orchestrator | ok: [testbed-manager]
2026-02-03 02:39:43.477322 | orchestrator |
2026-02-03 02:39:43.477337 | orchestrator | TASK [Copy fact files] *********************************************************
2026-02-03 02:39:44.575546 | orchestrator | changed: [testbed-manager]
2026-02-03 02:39:44.575651 | orchestrator |
2026-02-03 02:39:44.575676 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-02-03 02:39:44.575724 | orchestrator |
2026-02-03 02:39:44.575734 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-03 02:39:47.094462 | orchestrator | ok: [testbed-manager]
2026-02-03 02:39:47.094539 | orchestrator |
2026-02-03 02:39:47.094548 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-02-03 02:39:47.151914 | orchestrator | ok: [testbed-manager]
2026-02-03 02:39:47.152001 | orchestrator |
2026-02-03 02:39:47.152012 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-02-03 02:39:47.689381 | orchestrator | changed: [testbed-manager]
2026-02-03 02:39:47.689470 | orchestrator |
2026-02-03 02:39:47.689484 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-02-03 02:39:47.742797 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:39:47.742881 | orchestrator |
2026-02-03 02:39:47.742895 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-02-03 02:39:48.117795 | orchestrator | changed: [testbed-manager]
2026-02-03 02:39:48.117872 | orchestrator |
2026-02-03 02:39:48.117883 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-02-03 02:39:48.474715 | orchestrator | ok: [testbed-manager]
2026-02-03 02:39:48.474821 | orchestrator |
2026-02-03 02:39:48.474841 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-02-03 02:39:48.627969 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:39:48.628070 | orchestrator |
2026-02-03 02:39:48.628087 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-02-03 02:39:48.628100 | orchestrator |
2026-02-03 02:39:48.628113 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-03 02:39:50.562639 | orchestrator | ok: [testbed-manager]
2026-02-03 02:39:50.562735 | orchestrator |
2026-02-03 02:39:50.562749 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-02-03 02:39:50.663068 | orchestrator | included: osism.services.traefik for testbed-manager
2026-02-03 02:39:50.663146 | orchestrator |
2026-02-03 02:39:50.663156 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-02-03 02:39:50.730806 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-02-03 02:39:50.730903 | orchestrator |
2026-02-03 02:39:50.730919 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-02-03 02:39:51.931420 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-02-03 02:39:51.931526 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-02-03 02:39:51.931542 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-02-03 02:39:51.931555 | orchestrator |
2026-02-03 02:39:51.931571 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-02-03 02:39:53.948977 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-02-03 02:39:53.949091 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-02-03 02:39:53.949108 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-02-03 02:39:53.949121 | orchestrator |
2026-02-03 02:39:53.949133 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-02-03 02:39:54.732469 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-03 02:39:54.732545 | orchestrator | changed: [testbed-manager]
2026-02-03 02:39:54.732555 | orchestrator |
2026-02-03 02:39:54.732562 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-02-03 02:39:55.430402 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-03 02:39:55.430479 | orchestrator | changed: [testbed-manager]
2026-02-03 02:39:55.430488 | orchestrator |
2026-02-03 02:39:55.430494 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-02-03 02:39:55.489365 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:39:55.489450 | orchestrator |
2026-02-03 02:39:55.489463 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-02-03 02:39:55.880953 | orchestrator | ok: [testbed-manager]
2026-02-03 02:39:55.881084 | orchestrator |
2026-02-03 02:39:55.881104 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-02-03 02:39:55.963603 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-02-03 02:39:55.963725 | orchestrator |
2026-02-03 02:39:55.963752 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-02-03 02:39:57.255640 | orchestrator | changed: [testbed-manager]
2026-02-03 02:39:57.255711 | orchestrator |
2026-02-03 02:39:57.255721 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-02-03 02:39:58.172266 | orchestrator | changed: [testbed-manager]
2026-02-03 02:39:58.172356 | orchestrator |
2026-02-03 02:39:58.172366 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-02-03 02:40:13.489393 | orchestrator | changed: [testbed-manager]
2026-02-03 02:40:13.489485 | orchestrator |
2026-02-03 02:40:13.489495 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-02-03 02:40:13.540027 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:40:13.540110 | orchestrator |
2026-02-03 02:40:13.540142 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-02-03 02:40:13.540149 | orchestrator |
2026-02-03 02:40:13.540154 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-03 02:40:15.623964 | orchestrator | ok: [testbed-manager]
2026-02-03 02:40:15.624048 | orchestrator |
2026-02-03 02:40:15.624059 | orchestrator | TASK [Apply manager role] ******************************************************
2026-02-03 02:40:15.758785 | orchestrator | included: osism.services.manager for testbed-manager
2026-02-03 02:40:15.758901 | orchestrator |
2026-02-03 02:40:15.758926 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-02-03 02:40:15.837815 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-02-03 02:40:15.837889 | orchestrator |
2026-02-03 02:40:15.837903 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-02-03 02:40:18.848864 | orchestrator | ok: [testbed-manager]
2026-02-03 02:40:18.848981 | orchestrator |
2026-02-03 02:40:18.848999 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-02-03 02:40:18.905988 | orchestrator | ok: [testbed-manager]
2026-02-03 02:40:18.906136 | orchestrator |
2026-02-03 02:40:18.906154 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-02-03 02:40:19.059142 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-02-03 02:40:19.059253 | orchestrator |
2026-02-03 02:40:19.059263 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-02-03 02:40:22.162393 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-02-03 02:40:22.162531 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-02-03 02:40:22.162558 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-02-03 02:40:22.162579 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-02-03 02:40:22.162597 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-02-03 02:40:22.162616 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-02-03 02:40:22.162634 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-02-03 02:40:22.162653 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-02-03 02:40:22.162671 | orchestrator |
2026-02-03 02:40:22.162690 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-02-03 02:40:22.855546 | orchestrator | changed: [testbed-manager]
2026-02-03 02:40:22.855631 | orchestrator |
2026-02-03 02:40:22.855645 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-02-03 02:40:23.584148 | orchestrator | changed: [testbed-manager]
2026-02-03 02:40:23.584339 | orchestrator |
2026-02-03 02:40:23.584360 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-02-03 02:40:23.677927 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-02-03 02:40:23.678119 | orchestrator |
2026-02-03 02:40:23.678135 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-02-03 02:40:25.015843 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-02-03 02:40:25.015951 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-02-03 02:40:25.015960 | orchestrator |
2026-02-03 02:40:25.015966 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-02-03 02:40:25.737891 | orchestrator | changed: [testbed-manager]
2026-02-03 02:40:25.738100 | orchestrator |
2026-02-03 02:40:25.738135 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-02-03 02:40:25.799775 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:40:25.799897 | orchestrator |
2026-02-03 02:40:25.799927 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-02-03 02:40:25.891493 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-02-03 02:40:25.891588 | orchestrator |
2026-02-03 02:40:25.891602 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-02-03 02:40:26.574382 | orchestrator | changed: [testbed-manager]
2026-02-03 02:40:26.574512 | orchestrator |
2026-02-03 02:40:26.574532 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-02-03 02:40:26.647885 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-02-03 02:40:26.647984 | orchestrator |
2026-02-03 02:40:26.647996 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-02-03 02:40:28.189966 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-03 02:40:28.190093 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-03 02:40:28.190105 | orchestrator | changed: [testbed-manager]
2026-02-03 02:40:28.190114 | orchestrator |
2026-02-03 02:40:28.190122 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-02-03 02:40:28.868954 | orchestrator | changed: [testbed-manager]
2026-02-03 02:40:28.869088 | orchestrator |
2026-02-03 02:40:28.869118 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-02-03 02:40:28.912090 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:40:28.912193 | orchestrator |
2026-02-03 02:40:28.912210 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-02-03 02:40:29.022815 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-02-03 02:40:29.022886 | orchestrator |
2026-02-03 02:40:29.022894 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-02-03 02:40:29.636416 | orchestrator | changed: [testbed-manager]
2026-02-03 02:40:29.636532 | orchestrator |
2026-02-03 02:40:29.636551 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-02-03 02:40:30.099103 | orchestrator | changed: [testbed-manager]
2026-02-03 02:40:30.099202 | orchestrator |
2026-02-03 02:40:30.099280 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-02-03 02:40:31.483639 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-02-03 02:40:31.483761 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-02-03 02:40:31.483784 | orchestrator |
2026-02-03 02:40:31.483801 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-02-03 02:40:32.215679 | orchestrator | changed: [testbed-manager]
2026-02-03 02:40:32.215747 | orchestrator |
2026-02-03 02:40:32.215754 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-02-03 02:40:32.683066 | orchestrator | ok: [testbed-manager]
2026-02-03 02:40:32.683151 | orchestrator |
2026-02-03 02:40:32.683163 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-02-03 02:40:33.096772 | orchestrator | changed: [testbed-manager]
2026-02-03 02:40:33.096851 | orchestrator |
2026-02-03 02:40:33.096860 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-02-03 02:40:33.137408 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:40:33.137487 | orchestrator |
2026-02-03 02:40:33.137497 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-02-03 02:40:33.207685 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-02-03 02:40:33.208672 | orchestrator |
2026-02-03 02:40:33.208713 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-02-03 02:40:33.260486 | orchestrator | ok: [testbed-manager]
2026-02-03 02:40:33.260567 | orchestrator |
2026-02-03 02:40:33.260579 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-02-03 02:40:35.614780 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-02-03 02:40:35.614876 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-02-03 02:40:35.614891 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-02-03 02:40:35.614900 | orchestrator |
2026-02-03 02:40:35.614910 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-02-03 02:40:36.384766 | orchestrator | changed: [testbed-manager]
2026-02-03 02:40:36.384840 | orchestrator |
2026-02-03 02:40:36.384850 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-02-03 02:40:37.167208 | orchestrator | changed: [testbed-manager]
2026-02-03 02:40:37.167402 | orchestrator |
2026-02-03 02:40:37.167432 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-02-03 02:40:37.970490 | orchestrator | changed: [testbed-manager]
2026-02-03 02:40:37.970636 | orchestrator |
2026-02-03 02:40:37.970655 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-02-03 02:40:38.044861 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-02-03 02:40:38.044963 | orchestrator |
2026-02-03 02:40:38.044983 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-02-03 02:40:38.094509 | orchestrator | ok: [testbed-manager]
2026-02-03 02:40:38.094617 | orchestrator |
2026-02-03 02:40:38.094639 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-02-03 02:40:38.906439 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-02-03 02:40:38.906531 | orchestrator |
2026-02-03 02:40:38.906540 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-02-03 02:40:38.998685 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-02-03 02:40:38.998751 | orchestrator |
2026-02-03 02:40:38.998758 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-02-03 02:40:39.788635 | orchestrator | changed: [testbed-manager]
2026-02-03 02:40:39.788724 | orchestrator |
2026-02-03 02:40:39.788732 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-02-03 02:40:40.466950 | orchestrator | ok: [testbed-manager]
2026-02-03 02:40:40.467039 | orchestrator |
2026-02-03 02:40:40.467052 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-02-03 02:40:40.532086 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:40:40.532203 | orchestrator |
2026-02-03 02:40:40.532282 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-02-03 02:40:40.600364 | orchestrator | ok: [testbed-manager]
2026-02-03 02:40:40.600434 | orchestrator |
2026-02-03 02:40:40.600440 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-02-03 02:40:41.503903 | orchestrator | changed: [testbed-manager]
2026-02-03 02:40:41.504040 | orchestrator |
2026-02-03 02:40:41.504067 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-02-03 02:41:54.409026 | orchestrator | changed: [testbed-manager]
2026-02-03 02:41:54.409118 | orchestrator |
2026-02-03 02:41:54.409127 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-02-03 02:41:55.475785 | orchestrator | ok: [testbed-manager]
2026-02-03 02:41:55.475878 | orchestrator |
2026-02-03 02:41:55.475888 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-02-03 02:41:55.536397 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:41:55.536501 | orchestrator |
2026-02-03 02:41:55.536519 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-02-03 02:42:01.941148 | orchestrator | changed: [testbed-manager]
2026-02-03 02:42:01.941303 | orchestrator |
2026-02-03 02:42:01.941320 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-02-03 02:42:02.000102 | orchestrator | ok: [testbed-manager]
2026-02-03 02:42:02.000272 | orchestrator |
2026-02-03 02:42:02.000299 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-02-03 02:42:02.000318 | orchestrator |
2026-02-03 02:42:02.000337 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-02-03 02:42:02.179354 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:42:02.179436 | orchestrator |
2026-02-03 02:42:02.179446 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-02-03 02:43:02.241925 | orchestrator | Pausing for 60 seconds
2026-02-03 02:43:02.242099 | orchestrator | changed: [testbed-manager]
2026-02-03 02:43:02.242121 | orchestrator |
2026-02-03 02:43:02.242138 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-02-03 02:43:04.908734 | orchestrator | changed: [testbed-manager]
2026-02-03 02:43:04.908836 | orchestrator |
2026-02-03 02:43:04.908849 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-02-03 02:44:07.074337 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-02-03 02:44:07.074441 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-02-03 02:44:07.074472 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2026-02-03 02:44:07.074482 | orchestrator | changed: [testbed-manager]
2026-02-03 02:44:07.074492 | orchestrator |
2026-02-03 02:44:07.074502 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-02-03 02:44:18.074639 | orchestrator | changed: [testbed-manager]
2026-02-03 02:44:18.074750 | orchestrator |
2026-02-03 02:44:18.074764 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-02-03 02:44:18.162307 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-02-03 02:44:18.162420 | orchestrator |
2026-02-03 02:44:18.162430 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-02-03 02:44:18.162437 | orchestrator |
2026-02-03 02:44:18.162443 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-02-03 02:44:18.222443 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:44:18.222573 | orchestrator |
2026-02-03 02:44:18.222595 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-02-03 02:44:18.300557 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-02-03 02:44:18.300722 | orchestrator |
2026-02-03 02:44:18.300748 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-02-03 02:44:19.091809 | orchestrator | changed: [testbed-manager]
2026-02-03 02:44:19.091933 | orchestrator |
2026-02-03 02:44:19.091943 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-02-03 02:44:22.527629 | orchestrator | ok: [testbed-manager]
2026-02-03 02:44:22.527736 | orchestrator |
2026-02-03 02:44:22.527750 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-02-03 02:44:22.600047 | orchestrator | ok: [testbed-manager] => {
2026-02-03 02:44:22.600154 | orchestrator | "version_check_result.stdout_lines": [
2026-02-03 02:44:22.600172 | orchestrator | "=== OSISM Container Version Check ===",
2026-02-03 02:44:22.600187 | orchestrator | "Checking running containers against expected versions...",
2026-02-03 02:44:22.600202 | orchestrator | "",
2026-02-03 02:44:22.600267 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-02-03 02:44:22.600283 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-02-03 02:44:22.600297 | orchestrator | " Enabled: true",
2026-02-03 02:44:22.600310 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-02-03 02:44:22.600324 | orchestrator | " Status: ✅ MATCH",
2026-02-03 02:44:22.600338 | orchestrator | "",
2026-02-03 02:44:22.600351 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-02-03 02:44:22.600395 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-02-03 02:44:22.600408 | orchestrator | " Enabled: true",
2026-02-03 02:44:22.600421 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-02-03 02:44:22.600434 | orchestrator | " Status: ✅ MATCH",
2026-02-03 02:44:22.600446 | orchestrator | "",
2026-02-03 02:44:22.600459 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-02-03 02:44:22.600471 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-02-03 02:44:22.600483 | orchestrator | " Enabled: true",
2026-02-03 02:44:22.600496 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-02-03 02:44:22.600508 | orchestrator | " Status: ✅ MATCH",
2026-02-03 02:44:22.600521 | orchestrator | "",
2026-02-03 02:44:22.600534 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-02-03 02:44:22.600546 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-02-03 02:44:22.600559 | orchestrator | " Enabled: true",
2026-02-03 02:44:22.600572 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-02-03 02:44:22.600586 | orchestrator | " Status: ✅ MATCH",
2026-02-03 02:44:22.600599 | orchestrator | "",
2026-02-03 02:44:22.600616 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-02-03 02:44:22.600630 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-02-03 02:44:22.600643 | orchestrator | " Enabled: true",
2026-02-03 02:44:22.600656 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-02-03 02:44:22.600670 | orchestrator | " Status: ✅ MATCH",
2026-02-03 02:44:22.600683 | orchestrator | "",
2026-02-03 02:44:22.600697 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-02-03 02:44:22.600711 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-03 02:44:22.600724 | orchestrator | " Enabled: true",
2026-02-03 02:44:22.600738 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-03 02:44:22.600750 | orchestrator | " Status: ✅ MATCH",
2026-02-03 02:44:22.600762 | orchestrator | "",
2026-02-03 02:44:22.600774 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-02-03 02:44:22.600786 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-02-03 02:44:22.600799 | orchestrator | " Enabled: true",
2026-02-03 02:44:22.600812 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-02-03 02:44:22.600825 | orchestrator | " Status: ✅ MATCH",
2026-02-03 02:44:22.600837 | orchestrator | "",
2026-02-03 02:44:22.600850 | orchestrator | "Checking service:
mariadb (MariaDB for ARA)", 2026-02-03 02:44:22.600865 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-03 02:44:22.600879 | orchestrator | " Enabled: true", 2026-02-03 02:44:22.600892 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-03 02:44:22.600905 | orchestrator | " Status: ✅ MATCH", 2026-02-03 02:44:22.600918 | orchestrator | "", 2026-02-03 02:44:22.600931 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-03 02:44:22.600944 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-02-03 02:44:22.600957 | orchestrator | " Enabled: true", 2026-02-03 02:44:22.600969 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-02-03 02:44:22.600982 | orchestrator | " Status: ✅ MATCH", 2026-02-03 02:44:22.600994 | orchestrator | "", 2026-02-03 02:44:22.601007 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-03 02:44:22.601019 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-03 02:44:22.601032 | orchestrator | " Enabled: true", 2026-02-03 02:44:22.601045 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-03 02:44:22.601058 | orchestrator | " Status: ✅ MATCH", 2026-02-03 02:44:22.601070 | orchestrator | "", 2026-02-03 02:44:22.601083 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-03 02:44:22.601109 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-03 02:44:22.601121 | orchestrator | " Enabled: true", 2026-02-03 02:44:22.601133 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-03 02:44:22.601146 | orchestrator | " Status: ✅ MATCH", 2026-02-03 02:44:22.601158 | orchestrator | "", 2026-02-03 02:44:22.601171 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-03 02:44:22.601183 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-03 02:44:22.601196 | orchestrator | " Enabled: true", 2026-02-03 02:44:22.601208 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-03 02:44:22.601250 | orchestrator | " Status: ✅ MATCH", 2026-02-03 02:44:22.601266 | orchestrator | "", 2026-02-03 02:44:22.601278 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-03 02:44:22.601290 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-03 02:44:22.601302 | orchestrator | " Enabled: true", 2026-02-03 02:44:22.601316 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-03 02:44:22.601328 | orchestrator | " Status: ✅ MATCH", 2026-02-03 02:44:22.601341 | orchestrator | "", 2026-02-03 02:44:22.601354 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-03 02:44:22.601368 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-03 02:44:22.601381 | orchestrator | " Enabled: true", 2026-02-03 02:44:22.601394 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-03 02:44:22.601433 | orchestrator | " Status: ✅ MATCH", 2026-02-03 02:44:22.601447 | orchestrator | "", 2026-02-03 02:44:22.601460 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-03 02:44:22.601473 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-03 02:44:22.601500 | orchestrator | " Enabled: true", 2026-02-03 02:44:22.601513 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-03 02:44:22.601525 | orchestrator | " Status: ✅ MATCH", 2026-02-03 02:44:22.601538 | orchestrator | "", 2026-02-03 02:44:22.601551 | orchestrator | "=== Summary ===", 2026-02-03 02:44:22.601564 | orchestrator | "Errors (version mismatches): 0", 2026-02-03 02:44:22.601582 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-02-03 02:44:22.601595 | orchestrator | "", 2026-02-03 02:44:22.601608 | orchestrator | "✅ All running containers match expected versions!" 2026-02-03 02:44:22.601621 | orchestrator | ] 2026-02-03 02:44:22.601635 | orchestrator | } 2026-02-03 02:44:22.601648 | orchestrator | 2026-02-03 02:44:22.601661 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-03 02:44:22.651837 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:44:22.651942 | orchestrator | 2026-02-03 02:44:22.651961 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 02:44:22.651977 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-02-03 02:44:22.651991 | orchestrator | 2026-02-03 02:44:22.761912 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-03 02:44:22.762010 | orchestrator | + deactivate 2026-02-03 02:44:22.762082 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-03 02:44:22.762097 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-03 02:44:22.762109 | orchestrator | + export PATH 2026-02-03 02:44:22.762120 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-03 02:44:22.762132 | orchestrator | + '[' -n '' ']' 2026-02-03 02:44:22.762144 | orchestrator | + hash -r 2026-02-03 02:44:22.762155 | orchestrator | + '[' -n '' ']' 2026-02-03 02:44:22.762166 | orchestrator | + unset VIRTUAL_ENV 2026-02-03 02:44:22.762177 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-03 02:44:22.762188 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-03 02:44:22.762199 | orchestrator | + unset -f deactivate 2026-02-03 02:44:22.762211 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-02-03 02:44:22.768666 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-03 02:44:22.768746 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-03 02:44:22.768786 | orchestrator | + local max_attempts=60 2026-02-03 02:44:22.768805 | orchestrator | + local name=ceph-ansible 2026-02-03 02:44:22.768825 | orchestrator | + local attempt_num=1 2026-02-03 02:44:22.769968 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-03 02:44:22.809352 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-03 02:44:22.809434 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-03 02:44:22.809446 | orchestrator | + local max_attempts=60 2026-02-03 02:44:22.809457 | orchestrator | + local name=kolla-ansible 2026-02-03 02:44:22.809466 | orchestrator | + local attempt_num=1 2026-02-03 02:44:22.810378 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-03 02:44:22.845063 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-03 02:44:22.845127 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-03 02:44:22.845133 | orchestrator | + local max_attempts=60 2026-02-03 02:44:22.845138 | orchestrator | + local name=osism-ansible 2026-02-03 02:44:22.845142 | orchestrator | + local attempt_num=1 2026-02-03 02:44:22.845400 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-03 02:44:22.881980 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-03 02:44:22.882156 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-03 02:44:22.882186 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-03 02:44:23.549142 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-02-03 02:44:23.749669 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-03 02:44:23.749752 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-02-03 02:44:23.749762 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-02-03 02:44:23.749770 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-02-03 02:44:23.749777 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-02-03 02:44:23.749799 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-02-03 02:44:23.749806 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-02-03 02:44:23.749813 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-02-03 02:44:23.749819 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-02-03 02:44:23.749825 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-02-03 02:44:23.749832 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 
2026-02-03 02:44:23.749838 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-02-03 02:44:23.749844 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-02-03 02:44:23.749888 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-02-03 02:44:23.749895 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-02-03 02:44:23.749902 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-02-03 02:44:23.756198 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-03 02:44:23.805963 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-03 02:44:23.806104 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-02-03 02:44:23.809680 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-02-03 02:44:36.118983 | orchestrator | 2026-02-03 02:44:36 | INFO  | Task 39f82937-ec6b-44e7-a016-56a156810ff8 (resolvconf) was prepared for execution. 2026-02-03 02:44:36.119095 | orchestrator | 2026-02-03 02:44:36 | INFO  | It takes a moment until task 39f82937-ec6b-44e7-a016-56a156810ff8 (resolvconf) has been started and output is visible here. 
2026-02-03 02:44:50.761619 | orchestrator | 2026-02-03 02:44:50.761748 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-02-03 02:44:50.761773 | orchestrator | 2026-02-03 02:44:50.761790 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-03 02:44:50.761804 | orchestrator | Tuesday 03 February 2026 02:44:40 +0000 (0:00:00.157) 0:00:00.157 ****** 2026-02-03 02:44:50.761819 | orchestrator | ok: [testbed-manager] 2026-02-03 02:44:50.761835 | orchestrator | 2026-02-03 02:44:50.761848 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-02-03 02:44:50.761866 | orchestrator | Tuesday 03 February 2026 02:44:44 +0000 (0:00:04.057) 0:00:04.215 ****** 2026-02-03 02:44:50.761881 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:44:50.761897 | orchestrator | 2026-02-03 02:44:50.761913 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-03 02:44:50.761928 | orchestrator | Tuesday 03 February 2026 02:44:44 +0000 (0:00:00.057) 0:00:04.272 ****** 2026-02-03 02:44:50.761944 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-02-03 02:44:50.761960 | orchestrator | 2026-02-03 02:44:50.761975 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-03 02:44:50.761990 | orchestrator | Tuesday 03 February 2026 02:44:44 +0000 (0:00:00.088) 0:00:04.360 ****** 2026-02-03 02:44:50.762088 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-02-03 02:44:50.762112 | orchestrator | 2026-02-03 02:44:50.762207 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-02-03 02:44:50.762226 | orchestrator | Tuesday 03 February 2026 02:44:44 +0000 (0:00:00.082) 0:00:04.443 ****** 2026-02-03 02:44:50.762266 | orchestrator | ok: [testbed-manager] 2026-02-03 02:44:50.762284 | orchestrator | 2026-02-03 02:44:50.762298 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-03 02:44:50.762313 | orchestrator | Tuesday 03 February 2026 02:44:45 +0000 (0:00:01.218) 0:00:05.662 ****** 2026-02-03 02:44:50.762329 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:44:50.762344 | orchestrator | 2026-02-03 02:44:50.762357 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-03 02:44:50.762372 | orchestrator | Tuesday 03 February 2026 02:44:45 +0000 (0:00:00.069) 0:00:05.731 ****** 2026-02-03 02:44:50.762417 | orchestrator | ok: [testbed-manager] 2026-02-03 02:44:50.762451 | orchestrator | 2026-02-03 02:44:50.762475 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-03 02:44:50.762490 | orchestrator | Tuesday 03 February 2026 02:44:46 +0000 (0:00:00.556) 0:00:06.288 ****** 2026-02-03 02:44:50.762503 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:44:50.762516 | orchestrator | 2026-02-03 02:44:50.762530 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-03 02:44:50.762545 | orchestrator | Tuesday 03 February 2026 02:44:46 +0000 (0:00:00.081) 0:00:06.370 ****** 2026-02-03 02:44:50.762558 | orchestrator | changed: [testbed-manager] 2026-02-03 02:44:50.762572 | orchestrator | 2026-02-03 02:44:50.762585 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-03 02:44:50.762599 | orchestrator | Tuesday 03 February 2026 02:44:47 +0000 (0:00:00.576) 0:00:06.946 ****** 2026-02-03 02:44:50.762612 | orchestrator | changed: 
[testbed-manager] 2026-02-03 02:44:50.762625 | orchestrator | 2026-02-03 02:44:50.762638 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-03 02:44:50.762651 | orchestrator | Tuesday 03 February 2026 02:44:48 +0000 (0:00:01.171) 0:00:08.117 ****** 2026-02-03 02:44:50.762665 | orchestrator | ok: [testbed-manager] 2026-02-03 02:44:50.762678 | orchestrator | 2026-02-03 02:44:50.762692 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-03 02:44:50.762705 | orchestrator | Tuesday 03 February 2026 02:44:49 +0000 (0:00:00.980) 0:00:09.097 ****** 2026-02-03 02:44:50.762718 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-02-03 02:44:50.762731 | orchestrator | 2026-02-03 02:44:50.762744 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-03 02:44:50.762758 | orchestrator | Tuesday 03 February 2026 02:44:49 +0000 (0:00:00.077) 0:00:09.175 ****** 2026-02-03 02:44:50.762771 | orchestrator | changed: [testbed-manager] 2026-02-03 02:44:50.762784 | orchestrator | 2026-02-03 02:44:50.762797 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 02:44:50.762811 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-03 02:44:50.762824 | orchestrator | 2026-02-03 02:44:50.762837 | orchestrator | 2026-02-03 02:44:50.762851 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 02:44:50.762865 | orchestrator | Tuesday 03 February 2026 02:44:50 +0000 (0:00:01.167) 0:00:10.343 ****** 2026-02-03 02:44:50.762878 | orchestrator | =============================================================================== 2026-02-03 02:44:50.762891 | 
orchestrator | Gathering Facts --------------------------------------------------------- 4.06s 2026-02-03 02:44:50.762904 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.22s 2026-02-03 02:44:50.762916 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.17s 2026-02-03 02:44:50.762930 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.17s 2026-02-03 02:44:50.762943 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.98s 2026-02-03 02:44:50.762957 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.58s 2026-02-03 02:44:50.762993 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.56s 2026-02-03 02:44:50.763006 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2026-02-03 02:44:50.763019 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2026-02-03 02:44:50.763033 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-02-03 02:44:50.763046 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-02-03 02:44:50.763060 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2026-02-03 02:44:50.763082 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-02-03 02:44:51.087043 | orchestrator | + osism apply sshconfig 2026-02-03 02:45:03.219427 | orchestrator | 2026-02-03 02:45:03 | INFO  | Task 2686d84e-33c4-4b7b-91a7-2be467362fdd (sshconfig) was prepared for execution. 
2026-02-03 02:45:03.219567 | orchestrator | 2026-02-03 02:45:03 | INFO  | It takes a moment until task 2686d84e-33c4-4b7b-91a7-2be467362fdd (sshconfig) has been started and output is visible here. 2026-02-03 02:45:15.458170 | orchestrator | 2026-02-03 02:45:15.458261 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-02-03 02:45:15.458287 | orchestrator | 2026-02-03 02:45:15.458308 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-02-03 02:45:15.458313 | orchestrator | Tuesday 03 February 2026 02:45:07 +0000 (0:00:00.160) 0:00:00.160 ****** 2026-02-03 02:45:15.458317 | orchestrator | ok: [testbed-manager] 2026-02-03 02:45:15.458322 | orchestrator | 2026-02-03 02:45:15.458327 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-02-03 02:45:15.458331 | orchestrator | Tuesday 03 February 2026 02:45:08 +0000 (0:00:00.591) 0:00:00.751 ****** 2026-02-03 02:45:15.458335 | orchestrator | changed: [testbed-manager] 2026-02-03 02:45:15.458341 | orchestrator | 2026-02-03 02:45:15.458345 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-02-03 02:45:15.458349 | orchestrator | Tuesday 03 February 2026 02:45:08 +0000 (0:00:00.516) 0:00:01.267 ****** 2026-02-03 02:45:15.458353 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-02-03 02:45:15.458357 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-02-03 02:45:15.458362 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-02-03 02:45:15.458366 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-02-03 02:45:15.458370 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-02-03 02:45:15.458374 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-02-03 02:45:15.458378 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-02-03 02:45:15.458381 | orchestrator | 2026-02-03 02:45:15.458385 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-02-03 02:45:15.458389 | orchestrator | Tuesday 03 February 2026 02:45:14 +0000 (0:00:05.979) 0:00:07.246 ****** 2026-02-03 02:45:15.458393 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:45:15.458396 | orchestrator | 2026-02-03 02:45:15.458400 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-02-03 02:45:15.458404 | orchestrator | Tuesday 03 February 2026 02:45:14 +0000 (0:00:00.074) 0:00:07.320 ****** 2026-02-03 02:45:15.458408 | orchestrator | changed: [testbed-manager] 2026-02-03 02:45:15.458412 | orchestrator | 2026-02-03 02:45:15.458415 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 02:45:15.458420 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-03 02:45:15.458425 | orchestrator | 2026-02-03 02:45:15.458429 | orchestrator | 2026-02-03 02:45:15.458433 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 02:45:15.458437 | orchestrator | Tuesday 03 February 2026 02:45:15 +0000 (0:00:00.560) 0:00:07.881 ****** 2026-02-03 02:45:15.458441 | orchestrator | =============================================================================== 2026-02-03 02:45:15.458445 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.98s 2026-02-03 02:45:15.458449 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.59s 2026-02-03 02:45:15.458453 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.56s 2026-02-03 02:45:15.458456 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.52s 2026-02-03 02:45:15.458474 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2026-02-03 02:45:15.789608 | orchestrator | + osism apply known-hosts 2026-02-03 02:45:27.967943 | orchestrator | 2026-02-03 02:45:27 | INFO  | Task 0e0c31a7-f704-4634-8682-39f93757280f (known-hosts) was prepared for execution. 2026-02-03 02:45:27.968055 | orchestrator | 2026-02-03 02:45:27 | INFO  | It takes a moment until task 0e0c31a7-f704-4634-8682-39f93757280f (known-hosts) has been started and output is visible here. 2026-02-03 02:45:45.380411 | orchestrator | 2026-02-03 02:45:45.380509 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-02-03 02:45:45.380521 | orchestrator | 2026-02-03 02:45:45.380528 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-02-03 02:45:45.380536 | orchestrator | Tuesday 03 February 2026 02:45:32 +0000 (0:00:00.178) 0:00:00.178 ****** 2026-02-03 02:45:45.380543 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-03 02:45:45.380550 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-03 02:45:45.380556 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-03 02:45:45.380562 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-03 02:45:45.380568 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-03 02:45:45.380574 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-03 02:45:45.380581 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-03 02:45:45.380587 | orchestrator | 2026-02-03 02:45:45.380593 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-02-03 02:45:45.380601 | orchestrator | Tuesday 03 February 2026 02:45:38 +0000 (0:00:06.164) 0:00:06.342 ****** 2026-02-03 
02:45:45.380609 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-03 02:45:45.380619 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-03 02:45:45.380625 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-03 02:45:45.380632 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-03 02:45:45.380638 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-03 02:45:45.380650 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-03 02:45:45.380655 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-03 02:45:45.380659 | orchestrator | 2026-02-03 02:45:45.380663 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-03 02:45:45.380667 | orchestrator | Tuesday 03 February 2026 02:45:38 +0000 (0:00:00.164) 0:00:06.506 ****** 2026-02-03 02:45:45.380671 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDw/zW4afxIhXuXZ6mBMdAuRD0HoWz67MpkUZ+kdKTx5nSCnYELIKIyRJ4tkxig2B60wffxYW4Xeux4zpU64bnU=) 2026-02-03 02:45:45.380680 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCr7rBH42tYLUBcGM2UPVU5kFtI1w3raDXSD/p5y75aCjkIuWy9PQG34EDRNYMfXxmibflX4XdoDv1dY/pnOQM81CIRo1Vh8EG3R2X+l/LZqnKEMAc60EK00ETeG3GsyrM3n+L8fsyi/CuLT2cj6trjoxEvvBroYZWpKtN2VS+ClLe9C6U8OlJt0ydCBrLn73OIoRfoUod/7mArX2DTYp0CcUSxXPO0jRHi++8IGL6dH808B9VuVAq0TrnihLEJ0oSNpc8HbMmBmttlww41YC04x/R2giYgKKnH8CGEMbbjypU22ByZuOYyOi5lUBpgwV6Y0dyeXa1cMnktSaobuMO6rlasFAlL0AA0GajlpMDbjELBE58dv2tWB8IvsxUE05XPZ4zoAM8JdIiThxQIG8POLNscyOgWRWat9CtjfMMiQGumaJVfPHJlzeIPHODWeztVtuEy2UWR60Zihq+eeg3knJeN6GRnf1cdIw3PO6e4lAdQ8mKHB7y7UBPM36mU9Z0=) 2026-02-03 02:45:45.380714 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJmA1lQeSH0IdK//Ua08xfRNWP7SAPx9/QJw0SqO7GlG) 2026-02-03 02:45:45.380720 | orchestrator | 2026-02-03 02:45:45.380724 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-03 02:45:45.380728 | orchestrator | Tuesday 03 February 2026 02:45:39 +0000 (0:00:01.253) 0:00:07.760 ****** 2026-02-03 02:45:45.380744 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4ccoD9BFd9lfoKFZgeMXwBKIQBEQMAw0ACv8Cn113Mad/EaKxy8z8+MHRm7TZxt7YIoiqv3a5hpNW9X0qv+dd8EGM+sGLJWOh27UMmmjPdMhjJO1uaOXmG4X2F1NahwTCmaFMbXnFCaPOoS2BhA9PQpNf47ustH7UN3pgBM/bghO/BchB1WAVNgfOGj26Ex4Q7NyshudcY+IOKfunouSJku0vQs7aXiqn67YDDpXPNk5muSLDRo3qRbzusmIUufUNed1pbofdE1MfrpAfGaf6KWBiSZYx5VwgePvXvJ+muwPccwkyvuxgZv9oKnLIibkXnZfAgCiQqjIBWLiozvknB6H+khqFrtikiUPt+RZN7rcDxj9ouyzYQbEA/meyybtJhpYkWciTnamKBF9pxaFhLeAHVgpsRZUx+qv3Rz5qm0WyfT7m0sn2C4E2HxrLp/wC9vBrWHKVdlSz8Am8EBqr3drnGQxCXHZn59EeD4OR4VTIKn9h2omVxyb3mEjqtiU=) 2026-02-03 02:45:45.380749 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBICUOJu5Xl50ZN8k5ChGEiuyPzKJis/MVgPW+cgE3gbVbagBkv8prg4inwZAGC0QE4dov05RLuv2ipX/VvMDx3g=) 2026-02-03 02:45:45.380753 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAKsSYwyuRUnz5XaGWNwL2+mqY+cf+IfXarofGDVWg2Q) 2026-02-03 02:45:45.380756 | orchestrator | 2026-02-03 02:45:45.380760 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-03 02:45:45.380764 | orchestrator | Tuesday 03 February 2026 02:45:40 +0000 (0:00:01.118) 0:00:08.879 ****** 2026-02-03 02:45:45.380767 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLCRAmyoiftcobnxdKacxC36Rps7DNFCk8DfSPCkYQwZj+Ec8urRGmadtilQWZxqS15PJhiYOlMkTGC3sZaNDFs=) 2026-02-03 02:45:45.380771 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDy1BbqV8geTYEkZ5gskBM+Utkxv47ihc0sVgVm2oDqXlAu2IVmCG2tWkiBlwuzNybZtLVaYMzh3DAPhIgEAR7xWXa3k07l0M9aKkL0fcxqmM9lUwP93Cf+4zLQ3Kx6lOIE3uabnfW87OZtAX5dlV7UhnyoUqXzpCtotwq96p0XhTiEC+SRMGhkCYj402ZIyQnYqpLysG0CpSGTYBrzkelFnhPbT1k491cLCzCG0GfIoe/WRymXszBlHLJbxX2olZtYDk4bpKC9DEiovRHtlQHMQx+WNZrX6i1i3Om22Dlo2BXIY9RDxVzbDm79JOVwt67sy89gbrhdR6RUW3ym8sBPbzJPq2/tnsIg751l+5mBnfezedfI3php6P7q0hO7VY3f4mP6xK9qhnicmXEsfSXZfJ7PfzsXp5wSWNIeahiUkOCJ21RZfq7hgOUaZafjVBNQcyAC6upzM/UNnUV0L1hmV6YAlTbT6AwR5I7YoqOTX86Q7c5pOse0KbD1f6Usq3c=) 2026-02-03 02:45:45.380776 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ1qRHslHs6+86uU3/hF5b4gazxdoCvT2Xxd0IRDgPEu) 2026-02-03 02:45:45.380779 | orchestrator | 2026-02-03 02:45:45.380783 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-03 02:45:45.380787 | orchestrator | Tuesday 03 February 2026 02:45:42 +0000 (0:00:01.131) 0:00:10.010 ****** 
2026-02-03 02:45:45.380791 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQsepxt7zrtoYmFseSrnUAR9iwI1BQYDaRGr6AN3MzCX3/z4KFSw4f9iH/V20QeXmvT9Q/p/0t/FiqAEQzBXLLahVPPkrE8eqfrCaQZC6fooqzZibyWse5pMErZVgE6iNx0SgPUbMexA0PjLmF/yrS9IYS4b3fRM6/Zs/ZnHaYw0FQM6GBOxT60qXR4rnKXB0wenOVHOGyP7vWAdqDOfH4THPblo9qhsZASRQFtM+8pefO8Pj9GVh+GTvHZU/JQ06JM+4K7I/wWC8zNpPXhBcJsksnMoSKy2eG0AwavO0NpmTu7pkPhJSAv5qXN3TjKIUnYRhLufYVE6/NBXiUl1y1R5XdcleI4QHzek4wEH4IrPfr6hHEm62J5Oep9JzP4O12Xh0eTcKIMBFY1uyZ+gdjHzABzV3kIibui55hpk3FWcMMJ0hUQ+CeDp+Pkcm+UQ/HCCqX7SC2WZ8D/JEZMwmVKxeFP1sN0wD54lLB+A57Be+j5kHATs5k7UjijvEhl6U=) 2026-02-03 02:45:45.380799 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE3HrclHAMTR7d8piNQtwNRf5ilRYhTZ/VimXSLXIJ7kFi1e2ob3L7I2HbwIzSNXMYhuQ+m+qIo3mBCYVHn1mHQ=) 2026-02-03 02:45:45.380803 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBQvsy6acJpznKPv3rAsa0Rh5ZyrizGtNv9C6Rf4oQWL) 2026-02-03 02:45:45.380806 | orchestrator | 2026-02-03 02:45:45.380810 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-03 02:45:45.380814 | orchestrator | Tuesday 03 February 2026 02:45:43 +0000 (0:00:01.103) 0:00:11.114 ****** 2026-02-03 02:45:45.380859 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAczUhjknr8xhi/XeUponfBc2ThzCdssoQBUa1P9vouQYwlP+tU/6STWTChhfHwFWxR0kjseRdnE53XdO04BNCU=) 2026-02-03 02:45:45.380863 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCTbYZVfnCUvVveUpSF2BRHLZ7uWXXFWo0eeh2tpBh+YyRFxG/XFDnsEEiEwtvz55f3DnDXmPdsMCsF8D3wAl6LFu4CjFI1V8JSERTTonYHmLTt2tlHB1sgfkat8ruaQufGZo41sjkL8yUczgrf1lYMA0jq7Mz4+rl6EgC2rKWQ+K9DlkKEfIPuv35sfSc6FodkdeAoRxAlxOX3T4CODqmJRjFC3v0vHP/1w2uHDLNklyXF7ZnskXpcQxL/ex7pMpbquiclKDvpPXJRoEzoyOfByle16MIpIece4hgh+ncPJ6we3X7g+GeSViPl+1THloOz/t4iF4KMaISWaCx2nF6KMBlUAhRxihCXrQoZge6TTmepU+bFLA9I8h9BiSb7HOCE2uT4nU4HYMtvvnH37ymEZIBMrCwQR1Who2ZOf/5ggadDMKWn/r3ptYRFvrAKGvSqdUpwuyxgESeYkN8x72iMnSMEnFC1VJuxaXXku1ukDl8lbGXzNJKO49YbMLhIles=) 2026-02-03 02:45:45.380867 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGNHetrHNLN70fuN3ptnUj9QAJWsZYcGcMqzgkJYDoIA) 2026-02-03 02:45:45.380871 | orchestrator | 2026-02-03 02:45:45.380874 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-03 02:45:45.380878 | orchestrator | Tuesday 03 February 2026 02:45:44 +0000 (0:00:01.110) 0:00:12.224 ****** 2026-02-03 02:45:45.380885 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO0prrY3NW4G+6vwFRwTgG7dz521VpT5pgpaAxXiARrotIBpdPpAXSt98TGlwdciS7U7kuVsZ6hXmaG3bsvPP9w=) 2026-02-03 02:45:56.534855 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM6Robt19oPZbyEp9QGZYfzGiZ8DgA1AAA3ssObPyWvN) 2026-02-03 02:45:56.534949 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCEQsxUNZnw0f0KT8uIAC0yRrLGROpBTq8LwEpORdxbTf9X80ZWlSXM1onYEURyvGs3a8tDUyDMQDjo6FM2fCMQMnVcAxWYyrpROZlI+ubF/1ensD0UcTFHGG+2wgVeQhQl0J3KSUFGAsdqLBErLXbKC4aw53p6ig5HYGzvstkSqS2nYU92O45QLMDpxIWIp+qNYBLYL0TCm+GwQTQugUCtzVPJHN58gYLpmYhJq+/cOPuLQc3Ovfwx7RwiAGU+MTympkZ7KSisTDTtCo3w3WsQnuGH2iW5CCIMHsznrqs7Ts7knQf0AZbVHRKzcMQrV9cQclYQrumIx+za+ZobD5DH7y9MLQwCfInBfqa+Q6yPnTHUBXVt41G3NR88B37xxGgVkTs/UUSbIkkzikREwwXmxFLVPXE8wRPiD8c3PACLqHDdCc8DPM1En/AhdlyxQlygzLk/px61WyU0S8obIDHEduz+GcSlyw2z83F9gCoONrVkNtI+bnISDbCHzr9YVuU=) 2026-02-03 02:45:56.534962 | orchestrator | 2026-02-03 02:45:56.534970 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-03 02:45:56.534978 | orchestrator | Tuesday 03 February 2026 02:45:45 +0000 (0:00:01.150) 0:00:13.374 ****** 2026-02-03 02:45:56.534985 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDyJmIcGRZHHMrDT48BBlka8/HvU6ZuUIy42yGcxYPRwMHhbqRqqv+wGl+Zm46/2mAtZvfZ7LIkaBRjEl/xcjQuqJhJ6tALgrgpV0om1PhBEGHvz32jfC+O3C+6+aunQwz9eYnvCmT0xL0k0wDuzsGYB0dPn1UYOvcEy64oiekzICjzPBaWyj8Z78vY/uv7VsRrb9RwKRN8u6vq5mPzjcQ6+JH+cke8vFbNvhzH35m1OTxUeZRNWo7N8ZYq9ULX6Xm//Vfl1mELju8FdAXFjOolW/nFWYn640pGMhMoPA4r7xeRRG1C7hbyiwJmArFS5mgRWMsJjvjyyJ35dtwSKFBdipICmQG7G6puiqTM7RD8buN8uyZ4AyCwagnmxhwy9BgyWuIWyIZqTL5dFlPwfVgSZGJM9MXJdVBEJ2tYx/TjlYL+7nM0GGDPmod/YcO2s0ySgIh7dLze6Snki3ypah/VPVlDJyZdr/jkc6K+tkeB6bWzz5JNuPA4YyTc1c+iCl0=) 2026-02-03 02:45:56.534993 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG9ZyKJkvrS0rNDx1HowAQ0YdiUETDW98nEZm2fpzlgYTA0Z96Jom9Z9cId6IRv4kRVptXrztVt5EVb4bYzYzJk=) 2026-02-03 02:45:56.535021 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA3LSWYc5x/CMpKd1vUunqIjpGEddea4yD8xgxkYGvun) 2026-02-03 02:45:56.535028 | orchestrator | 2026-02-03 02:45:56.535035 | 
orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-02-03 02:45:56.535042 | orchestrator | Tuesday 03 February 2026 02:45:46 +0000 (0:00:01.124) 0:00:14.499 ****** 2026-02-03 02:45:56.535049 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-03 02:45:56.535056 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-03 02:45:56.535062 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-03 02:45:56.535068 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-03 02:45:56.535074 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-03 02:45:56.535080 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-03 02:45:56.535086 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-03 02:45:56.535092 | orchestrator | 2026-02-03 02:45:56.535098 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-02-03 02:45:56.535105 | orchestrator | Tuesday 03 February 2026 02:45:51 +0000 (0:00:05.361) 0:00:19.860 ****** 2026-02-03 02:45:56.535112 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-03 02:45:56.535121 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-03 02:45:56.535133 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-03 02:45:56.535142 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-03 02:45:56.535158 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-03 02:45:56.535168 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-03 02:45:56.535178 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-03 02:45:56.535189 | orchestrator | 2026-02-03 02:45:56.535215 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-03 02:45:56.535228 | orchestrator | Tuesday 03 February 2026 02:45:52 +0000 (0:00:00.177) 0:00:20.038 ****** 2026-02-03 02:45:56.535248 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCr7rBH42tYLUBcGM2UPVU5kFtI1w3raDXSD/p5y75aCjkIuWy9PQG34EDRNYMfXxmibflX4XdoDv1dY/pnOQM81CIRo1Vh8EG3R2X+l/LZqnKEMAc60EK00ETeG3GsyrM3n+L8fsyi/CuLT2cj6trjoxEvvBroYZWpKtN2VS+ClLe9C6U8OlJt0ydCBrLn73OIoRfoUod/7mArX2DTYp0CcUSxXPO0jRHi++8IGL6dH808B9VuVAq0TrnihLEJ0oSNpc8HbMmBmttlww41YC04x/R2giYgKKnH8CGEMbbjypU22ByZuOYyOi5lUBpgwV6Y0dyeXa1cMnktSaobuMO6rlasFAlL0AA0GajlpMDbjELBE58dv2tWB8IvsxUE05XPZ4zoAM8JdIiThxQIG8POLNscyOgWRWat9CtjfMMiQGumaJVfPHJlzeIPHODWeztVtuEy2UWR60Zihq+eeg3knJeN6GRnf1cdIw3PO6e4lAdQ8mKHB7y7UBPM36mU9Z0=) 2026-02-03 02:45:56.535259 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDw/zW4afxIhXuXZ6mBMdAuRD0HoWz67MpkUZ+kdKTx5nSCnYELIKIyRJ4tkxig2B60wffxYW4Xeux4zpU64bnU=) 2026-02-03 02:45:56.535272 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJmA1lQeSH0IdK//Ua08xfRNWP7SAPx9/QJw0SqO7GlG) 2026-02-03 02:45:56.535278 | orchestrator | 2026-02-03 02:45:56.535285 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-03 02:45:56.535291 | orchestrator | Tuesday 03 February 2026 02:45:53 +0000 (0:00:01.159) 0:00:21.197 ****** 2026-02-03 02:45:56.535298 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAKsSYwyuRUnz5XaGWNwL2+mqY+cf+IfXarofGDVWg2Q) 2026-02-03 02:45:56.535304 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4ccoD9BFd9lfoKFZgeMXwBKIQBEQMAw0ACv8Cn113Mad/EaKxy8z8+MHRm7TZxt7YIoiqv3a5hpNW9X0qv+dd8EGM+sGLJWOh27UMmmjPdMhjJO1uaOXmG4X2F1NahwTCmaFMbXnFCaPOoS2BhA9PQpNf47ustH7UN3pgBM/bghO/BchB1WAVNgfOGj26Ex4Q7NyshudcY+IOKfunouSJku0vQs7aXiqn67YDDpXPNk5muSLDRo3qRbzusmIUufUNed1pbofdE1MfrpAfGaf6KWBiSZYx5VwgePvXvJ+muwPccwkyvuxgZv9oKnLIibkXnZfAgCiQqjIBWLiozvknB6H+khqFrtikiUPt+RZN7rcDxj9ouyzYQbEA/meyybtJhpYkWciTnamKBF9pxaFhLeAHVgpsRZUx+qv3Rz5qm0WyfT7m0sn2C4E2HxrLp/wC9vBrWHKVdlSz8Am8EBqr3drnGQxCXHZn59EeD4OR4VTIKn9h2omVxyb3mEjqtiU=) 2026-02-03 02:45:56.535310 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBICUOJu5Xl50ZN8k5ChGEiuyPzKJis/MVgPW+cgE3gbVbagBkv8prg4inwZAGC0QE4dov05RLuv2ipX/VvMDx3g=) 2026-02-03 02:45:56.535316 | orchestrator | 2026-02-03 02:45:56.535323 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-03 02:45:56.535329 | orchestrator | Tuesday 03 February 2026 02:45:54 +0000 (0:00:01.159) 0:00:22.357 ****** 2026-02-03 02:45:56.535335 | 
orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDy1BbqV8geTYEkZ5gskBM+Utkxv47ihc0sVgVm2oDqXlAu2IVmCG2tWkiBlwuzNybZtLVaYMzh3DAPhIgEAR7xWXa3k07l0M9aKkL0fcxqmM9lUwP93Cf+4zLQ3Kx6lOIE3uabnfW87OZtAX5dlV7UhnyoUqXzpCtotwq96p0XhTiEC+SRMGhkCYj402ZIyQnYqpLysG0CpSGTYBrzkelFnhPbT1k491cLCzCG0GfIoe/WRymXszBlHLJbxX2olZtYDk4bpKC9DEiovRHtlQHMQx+WNZrX6i1i3Om22Dlo2BXIY9RDxVzbDm79JOVwt67sy89gbrhdR6RUW3ym8sBPbzJPq2/tnsIg751l+5mBnfezedfI3php6P7q0hO7VY3f4mP6xK9qhnicmXEsfSXZfJ7PfzsXp5wSWNIeahiUkOCJ21RZfq7hgOUaZafjVBNQcyAC6upzM/UNnUV0L1hmV6YAlTbT6AwR5I7YoqOTX86Q7c5pOse0KbD1f6Usq3c=) 2026-02-03 02:45:56.535343 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLCRAmyoiftcobnxdKacxC36Rps7DNFCk8DfSPCkYQwZj+Ec8urRGmadtilQWZxqS15PJhiYOlMkTGC3sZaNDFs=) 2026-02-03 02:45:56.535354 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ1qRHslHs6+86uU3/hF5b4gazxdoCvT2Xxd0IRDgPEu) 2026-02-03 02:45:56.535387 | orchestrator | 2026-02-03 02:45:56.535399 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-03 02:45:56.535411 | orchestrator | Tuesday 03 February 2026 02:45:55 +0000 (0:00:01.107) 0:00:23.464 ****** 2026-02-03 02:45:56.535432 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQsepxt7zrtoYmFseSrnUAR9iwI1BQYDaRGr6AN3MzCX3/z4KFSw4f9iH/V20QeXmvT9Q/p/0t/FiqAEQzBXLLahVPPkrE8eqfrCaQZC6fooqzZibyWse5pMErZVgE6iNx0SgPUbMexA0PjLmF/yrS9IYS4b3fRM6/Zs/ZnHaYw0FQM6GBOxT60qXR4rnKXB0wenOVHOGyP7vWAdqDOfH4THPblo9qhsZASRQFtM+8pefO8Pj9GVh+GTvHZU/JQ06JM+4K7I/wWC8zNpPXhBcJsksnMoSKy2eG0AwavO0NpmTu7pkPhJSAv5qXN3TjKIUnYRhLufYVE6/NBXiUl1y1R5XdcleI4QHzek4wEH4IrPfr6hHEm62J5Oep9JzP4O12Xh0eTcKIMBFY1uyZ+gdjHzABzV3kIibui55hpk3FWcMMJ0hUQ+CeDp+Pkcm+UQ/HCCqX7SC2WZ8D/JEZMwmVKxeFP1sN0wD54lLB+A57Be+j5kHATs5k7UjijvEhl6U=) 2026-02-03 
02:46:01.150258 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE3HrclHAMTR7d8piNQtwNRf5ilRYhTZ/VimXSLXIJ7kFi1e2ob3L7I2HbwIzSNXMYhuQ+m+qIo3mBCYVHn1mHQ=) 2026-02-03 02:46:01.150330 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBQvsy6acJpznKPv3rAsa0Rh5ZyrizGtNv9C6Rf4oQWL) 2026-02-03 02:46:01.150359 | orchestrator | 2026-02-03 02:46:01.150367 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-03 02:46:01.150433 | orchestrator | Tuesday 03 February 2026 02:45:56 +0000 (0:00:01.069) 0:00:24.534 ****** 2026-02-03 02:46:01.150441 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCTbYZVfnCUvVveUpSF2BRHLZ7uWXXFWo0eeh2tpBh+YyRFxG/XFDnsEEiEwtvz55f3DnDXmPdsMCsF8D3wAl6LFu4CjFI1V8JSERTTonYHmLTt2tlHB1sgfkat8ruaQufGZo41sjkL8yUczgrf1lYMA0jq7Mz4+rl6EgC2rKWQ+K9DlkKEfIPuv35sfSc6FodkdeAoRxAlxOX3T4CODqmJRjFC3v0vHP/1w2uHDLNklyXF7ZnskXpcQxL/ex7pMpbquiclKDvpPXJRoEzoyOfByle16MIpIece4hgh+ncPJ6we3X7g+GeSViPl+1THloOz/t4iF4KMaISWaCx2nF6KMBlUAhRxihCXrQoZge6TTmepU+bFLA9I8h9BiSb7HOCE2uT4nU4HYMtvvnH37ymEZIBMrCwQR1Who2ZOf/5ggadDMKWn/r3ptYRFvrAKGvSqdUpwuyxgESeYkN8x72iMnSMEnFC1VJuxaXXku1ukDl8lbGXzNJKO49YbMLhIles=) 2026-02-03 02:46:01.150449 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAczUhjknr8xhi/XeUponfBc2ThzCdssoQBUa1P9vouQYwlP+tU/6STWTChhfHwFWxR0kjseRdnE53XdO04BNCU=) 2026-02-03 02:46:01.150455 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGNHetrHNLN70fuN3ptnUj9QAJWsZYcGcMqzgkJYDoIA) 2026-02-03 02:46:01.150461 | orchestrator | 2026-02-03 02:46:01.150466 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-03 02:46:01.150472 | orchestrator | 
Tuesday 03 February 2026 02:45:57 +0000 (0:00:01.100) 0:00:25.634 ****** 2026-02-03 02:46:01.150477 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCEQsxUNZnw0f0KT8uIAC0yRrLGROpBTq8LwEpORdxbTf9X80ZWlSXM1onYEURyvGs3a8tDUyDMQDjo6FM2fCMQMnVcAxWYyrpROZlI+ubF/1ensD0UcTFHGG+2wgVeQhQl0J3KSUFGAsdqLBErLXbKC4aw53p6ig5HYGzvstkSqS2nYU92O45QLMDpxIWIp+qNYBLYL0TCm+GwQTQugUCtzVPJHN58gYLpmYhJq+/cOPuLQc3Ovfwx7RwiAGU+MTympkZ7KSisTDTtCo3w3WsQnuGH2iW5CCIMHsznrqs7Ts7knQf0AZbVHRKzcMQrV9cQclYQrumIx+za+ZobD5DH7y9MLQwCfInBfqa+Q6yPnTHUBXVt41G3NR88B37xxGgVkTs/UUSbIkkzikREwwXmxFLVPXE8wRPiD8c3PACLqHDdCc8DPM1En/AhdlyxQlygzLk/px61WyU0S8obIDHEduz+GcSlyw2z83F9gCoONrVkNtI+bnISDbCHzr9YVuU=) 2026-02-03 02:46:01.150483 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO0prrY3NW4G+6vwFRwTgG7dz521VpT5pgpaAxXiARrotIBpdPpAXSt98TGlwdciS7U7kuVsZ6hXmaG3bsvPP9w=) 2026-02-03 02:46:01.150489 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM6Robt19oPZbyEp9QGZYfzGiZ8DgA1AAA3ssObPyWvN) 2026-02-03 02:46:01.150494 | orchestrator | 2026-02-03 02:46:01.150500 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-03 02:46:01.150505 | orchestrator | Tuesday 03 February 2026 02:45:58 +0000 (0:00:01.094) 0:00:26.729 ****** 2026-02-03 02:46:01.150524 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDyJmIcGRZHHMrDT48BBlka8/HvU6ZuUIy42yGcxYPRwMHhbqRqqv+wGl+Zm46/2mAtZvfZ7LIkaBRjEl/xcjQuqJhJ6tALgrgpV0om1PhBEGHvz32jfC+O3C+6+aunQwz9eYnvCmT0xL0k0wDuzsGYB0dPn1UYOvcEy64oiekzICjzPBaWyj8Z78vY/uv7VsRrb9RwKRN8u6vq5mPzjcQ6+JH+cke8vFbNvhzH35m1OTxUeZRNWo7N8ZYq9ULX6Xm//Vfl1mELju8FdAXFjOolW/nFWYn640pGMhMoPA4r7xeRRG1C7hbyiwJmArFS5mgRWMsJjvjyyJ35dtwSKFBdipICmQG7G6puiqTM7RD8buN8uyZ4AyCwagnmxhwy9BgyWuIWyIZqTL5dFlPwfVgSZGJM9MXJdVBEJ2tYx/TjlYL+7nM0GGDPmod/YcO2s0ySgIh7dLze6Snki3ypah/VPVlDJyZdr/jkc6K+tkeB6bWzz5JNuPA4YyTc1c+iCl0=) 2026-02-03 02:46:01.150531 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG9ZyKJkvrS0rNDx1HowAQ0YdiUETDW98nEZm2fpzlgYTA0Z96Jom9Z9cId6IRv4kRVptXrztVt5EVb4bYzYzJk=) 2026-02-03 02:46:01.150536 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA3LSWYc5x/CMpKd1vUunqIjpGEddea4yD8xgxkYGvun) 2026-02-03 02:46:01.150542 | orchestrator | 2026-02-03 02:46:01.150547 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-02-03 02:46:01.150558 | orchestrator | Tuesday 03 February 2026 02:45:59 +0000 (0:00:01.118) 0:00:27.848 ****** 2026-02-03 02:46:01.150565 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-03 02:46:01.150571 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-02-03 02:46:01.150589 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-02-03 02:46:01.150595 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-03 02:46:01.150600 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-03 02:46:01.150606 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-03 02:46:01.150611 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-03 02:46:01.150617 | orchestrator | 
skipping: [testbed-manager] 2026-02-03 02:46:01.150623 | orchestrator | 2026-02-03 02:46:01.150629 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-02-03 02:46:01.150634 | orchestrator | Tuesday 03 February 2026 02:46:00 +0000 (0:00:00.192) 0:00:28.040 ****** 2026-02-03 02:46:01.150640 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:46:01.150645 | orchestrator | 2026-02-03 02:46:01.150651 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-02-03 02:46:01.150660 | orchestrator | Tuesday 03 February 2026 02:46:00 +0000 (0:00:00.066) 0:00:28.106 ****** 2026-02-03 02:46:01.150665 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:46:01.150671 | orchestrator | 2026-02-03 02:46:01.150677 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-02-03 02:46:01.150682 | orchestrator | Tuesday 03 February 2026 02:46:00 +0000 (0:00:00.055) 0:00:28.162 ****** 2026-02-03 02:46:01.150688 | orchestrator | changed: [testbed-manager] 2026-02-03 02:46:01.150693 | orchestrator | 2026-02-03 02:46:01.150699 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 02:46:01.150704 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-03 02:46:01.150711 | orchestrator | 2026-02-03 02:46:01.150716 | orchestrator | 2026-02-03 02:46:01.150722 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 02:46:01.150727 | orchestrator | Tuesday 03 February 2026 02:46:00 +0000 (0:00:00.750) 0:00:28.912 ****** 2026-02-03 02:46:01.150733 | orchestrator | =============================================================================== 2026-02-03 02:46:01.150738 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.16s 2026-02-03 
02:46:01.150744 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.36s 2026-02-03 02:46:01.150750 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.25s 2026-02-03 02:46:01.150755 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-02-03 02:46:01.150761 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-02-03 02:46:01.150766 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-02-03 02:46:01.150771 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-02-03 02:46:01.150777 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-02-03 02:46:01.150782 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-02-03 02:46:01.150788 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-02-03 02:46:01.150793 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-02-03 02:46:01.150799 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-02-03 02:46:01.150804 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-02-03 02:46:01.150809 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-02-03 02:46:01.150819 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-02-03 02:46:01.150824 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-02-03 02:46:01.150830 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.75s 2026-02-03 
02:46:01.150835 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.19s 2026-02-03 02:46:01.150841 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2026-02-03 02:46:01.150847 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2026-02-03 02:46:01.475447 | orchestrator | + osism apply squid 2026-02-03 02:46:13.674833 | orchestrator | 2026-02-03 02:46:13 | INFO  | Task 63e52441-db09-447a-952a-db067696d8ce (squid) was prepared for execution. 2026-02-03 02:46:13.674938 | orchestrator | 2026-02-03 02:46:13 | INFO  | It takes a moment until task 63e52441-db09-447a-952a-db067696d8ce (squid) has been started and output is visible here. 2026-02-03 02:48:08.660802 | orchestrator | 2026-02-03 02:48:08.660902 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-02-03 02:48:08.660915 | orchestrator | 2026-02-03 02:48:08.660923 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-02-03 02:48:08.660928 | orchestrator | Tuesday 03 February 2026 02:46:17 +0000 (0:00:00.163) 0:00:00.163 ****** 2026-02-03 02:48:08.660934 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-02-03 02:48:08.660942 | orchestrator | 2026-02-03 02:48:08.660950 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-02-03 02:48:08.660958 | orchestrator | Tuesday 03 February 2026 02:46:18 +0000 (0:00:00.093) 0:00:00.257 ****** 2026-02-03 02:48:08.660965 | orchestrator | ok: [testbed-manager] 2026-02-03 02:48:08.660973 | orchestrator | 2026-02-03 02:48:08.660978 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-02-03 
02:48:08.660983 | orchestrator | Tuesday 03 February 2026 02:46:19 +0000 (0:00:01.555) 0:00:01.812 ****** 2026-02-03 02:48:08.660989 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-02-03 02:48:08.660993 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-02-03 02:48:08.660999 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-02-03 02:48:08.661003 | orchestrator | 2026-02-03 02:48:08.661008 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-02-03 02:48:08.661013 | orchestrator | Tuesday 03 February 2026 02:46:20 +0000 (0:00:01.171) 0:00:02.983 ****** 2026-02-03 02:48:08.661017 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-02-03 02:48:08.661021 | orchestrator | 2026-02-03 02:48:08.661026 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-02-03 02:48:08.661030 | orchestrator | Tuesday 03 February 2026 02:46:21 +0000 (0:00:01.111) 0:00:04.095 ****** 2026-02-03 02:48:08.661036 | orchestrator | ok: [testbed-manager] 2026-02-03 02:48:08.661043 | orchestrator | 2026-02-03 02:48:08.661050 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-02-03 02:48:08.661057 | orchestrator | Tuesday 03 February 2026 02:46:22 +0000 (0:00:00.366) 0:00:04.461 ****** 2026-02-03 02:48:08.661065 | orchestrator | changed: [testbed-manager] 2026-02-03 02:48:08.661072 | orchestrator | 2026-02-03 02:48:08.661080 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-02-03 02:48:08.661087 | orchestrator | Tuesday 03 February 2026 02:46:23 +0000 (0:00:00.940) 0:00:05.402 ****** 2026-02-03 02:48:08.661094 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-02-03 02:48:08.661105 | orchestrator | ok: [testbed-manager]
2026-02-03 02:48:08.661112 | orchestrator |
2026-02-03 02:48:08.661119 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2026-02-03 02:48:08.661151 | orchestrator | Tuesday 03 February 2026 02:46:55 +0000 (0:00:32.566) 0:00:37.969 ******
2026-02-03 02:48:08.661159 | orchestrator | changed: [testbed-manager]
2026-02-03 02:48:08.661167 | orchestrator |
2026-02-03 02:48:08.661174 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2026-02-03 02:48:08.661182 | orchestrator | Tuesday 03 February 2026 02:47:07 +0000 (0:00:11.835) 0:00:49.804 ******
2026-02-03 02:48:08.661190 | orchestrator | Pausing for 60 seconds
2026-02-03 02:48:08.661198 | orchestrator | changed: [testbed-manager]
2026-02-03 02:48:08.661206 | orchestrator |
2026-02-03 02:48:08.661213 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2026-02-03 02:48:08.661220 | orchestrator | Tuesday 03 February 2026 02:48:07 +0000 (0:01:00.088) 0:01:49.892 ******
2026-02-03 02:48:08.661228 | orchestrator | ok: [testbed-manager]
2026-02-03 02:48:08.661234 | orchestrator |
2026-02-03 02:48:08.661242 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2026-02-03 02:48:08.661247 | orchestrator | Tuesday 03 February 2026 02:48:07 +0000 (0:00:00.079) 0:01:49.972 ******
2026-02-03 02:48:08.661252 | orchestrator | changed: [testbed-manager]
2026-02-03 02:48:08.661256 | orchestrator |
2026-02-03 02:48:08.661261 | orchestrator | PLAY RECAP *********************************************************************
2026-02-03 02:48:08.661265 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-03 02:48:08.661269 | orchestrator |
2026-02-03 02:48:08.661274 | orchestrator |
2026-02-03 02:48:08.661278 | orchestrator | TASKS RECAP ********************************************************************
2026-02-03 02:48:08.661283 | orchestrator | Tuesday 03 February 2026 02:48:08 +0000 (0:00:00.639) 0:01:50.611 ******
2026-02-03 02:48:08.661287 | orchestrator | ===============================================================================
2026-02-03 02:48:08.661308 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s
2026-02-03 02:48:08.661316 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.57s
2026-02-03 02:48:08.661321 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.84s
2026-02-03 02:48:08.661325 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.56s
2026-02-03 02:48:08.661330 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.17s
2026-02-03 02:48:08.661334 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.11s
2026-02-03 02:48:08.661338 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.94s
2026-02-03 02:48:08.661343 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.64s
2026-02-03 02:48:08.661349 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s
2026-02-03 02:48:08.661356 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s
2026-02-03 02:48:08.661363 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s
2026-02-03 02:48:08.973949 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-02-03 02:48:08.974181 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-02-03 02:48:09.036663 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-03 02:48:09.036733 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-02-03 02:48:09.044666 | orchestrator | + set -e
2026-02-03 02:48:09.044770 | orchestrator | + NAMESPACE=kolla/release
2026-02-03 02:48:09.044781 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-02-03 02:48:09.052109 | orchestrator | ++ semver 9.5.0 9.0.0
2026-02-03 02:48:09.129077 | orchestrator | + [[ 1 -lt 0 ]]
2026-02-03 02:48:09.129981 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-02-03 02:48:21.159423 | orchestrator | 2026-02-03 02:48:21 | INFO  | Task 3045641f-5656-41a8-8a3e-92966b5e2d25 (operator) was prepared for execution.
2026-02-03 02:48:21.159582 | orchestrator | 2026-02-03 02:48:21 | INFO  | It takes a moment until task 3045641f-5656-41a8-8a3e-92966b5e2d25 (operator) has been started and output is visible here.
2026-02-03 02:48:38.325978 | orchestrator |
2026-02-03 02:48:38.326196 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-02-03 02:48:38.326225 | orchestrator |
2026-02-03 02:48:38.326239 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-03 02:48:38.326250 | orchestrator | Tuesday 03 February 2026 02:48:25 +0000 (0:00:00.144) 0:00:00.144 ******
2026-02-03 02:48:38.326262 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:48:38.326274 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:48:38.326285 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:48:38.326295 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:48:38.326306 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:48:38.326316 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:48:38.326327 | orchestrator |
2026-02-03 02:48:38.326338 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-02-03 02:48:38.326349 | orchestrator | Tuesday 03 February 2026 02:48:29 +0000 (0:00:04.351) 0:00:04.496 ******
2026-02-03 02:48:38.326360 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:48:38.326371 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:48:38.326382 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:48:38.326409 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:48:38.326420 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:48:38.326430 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:48:38.326446 | orchestrator |
2026-02-03 02:48:38.326465 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-02-03 02:48:38.326483 | orchestrator |
2026-02-03 02:48:38.326502 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-02-03 02:48:38.326551 | orchestrator | Tuesday 03 February 2026 02:48:30 +0000 (0:00:00.778) 0:00:05.274 ******
2026-02-03 02:48:38.326571 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:48:38.326592 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:48:38.326610 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:48:38.326630 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:48:38.326645 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:48:38.326658 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:48:38.326671 | orchestrator |
2026-02-03 02:48:38.326684 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-02-03 02:48:38.326696 | orchestrator | Tuesday 03 February 2026 02:48:30 +0000 (0:00:00.168) 0:00:05.443 ******
2026-02-03 02:48:38.326709 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:48:38.326722 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:48:38.326734 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:48:38.326747 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:48:38.326760 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:48:38.326772 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:48:38.326785 | orchestrator |
2026-02-03 02:48:38.326798 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-02-03 02:48:38.326810 | orchestrator | Tuesday 03 February 2026 02:48:30 +0000 (0:00:00.177) 0:00:05.621 ******
2026-02-03 02:48:38.326824 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:48:38.326838 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:48:38.326858 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:48:38.326877 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:48:38.326895 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:48:38.326914 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:48:38.326932 | orchestrator |
2026-02-03 02:48:38.326953 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-02-03 02:48:38.326972 | orchestrator | Tuesday 03 February 2026 02:48:31 +0000 (0:00:00.640) 0:00:06.262 ******
2026-02-03 02:48:38.326990 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:48:38.327005 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:48:38.327016 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:48:38.327027 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:48:38.327038 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:48:38.327049 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:48:38.327083 | orchestrator |
2026-02-03 02:48:38.327095 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-02-03 02:48:38.327106 | orchestrator | Tuesday 03 February 2026 02:48:32 +0000 (0:00:00.827) 0:00:07.090 ******
2026-02-03 02:48:38.327117 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-02-03 02:48:38.327128 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-02-03 02:48:38.327139 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-02-03 02:48:38.327150 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-02-03 02:48:38.327161 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-02-03 02:48:38.327171 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-02-03 02:48:38.327182 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-02-03 02:48:38.327193 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-02-03 02:48:38.327204 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-02-03 02:48:38.327219 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-02-03 02:48:38.327237 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-02-03 02:48:38.327255 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-02-03 02:48:38.327273 | orchestrator |
2026-02-03 02:48:38.327292 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-02-03 02:48:38.327311 | orchestrator | Tuesday 03 February 2026 02:48:33 +0000 (0:00:01.253) 0:00:08.343 ******
2026-02-03 02:48:38.327331 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:48:38.327349 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:48:38.327365 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:48:38.327376 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:48:38.327387 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:48:38.327397 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:48:38.327408 | orchestrator |
2026-02-03 02:48:38.327420 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-02-03 02:48:38.327431 | orchestrator | Tuesday 03 February 2026 02:48:34 +0000 (0:00:01.319) 0:00:09.662 ******
2026-02-03 02:48:38.327442 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-02-03 02:48:38.327454 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-02-03 02:48:38.327464 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-02-03 02:48:38.327475 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-02-03 02:48:38.327528 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-02-03 02:48:38.327546 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-02-03 02:48:38.327557 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-02-03 02:48:38.327568 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-02-03 02:48:38.327579 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-02-03 02:48:38.327597 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-02-03 02:48:38.327615 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-02-03 02:48:38.327633 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-02-03 02:48:38.327652 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-02-03 02:48:38.327670 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-02-03 02:48:38.327689 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-02-03 02:48:38.327707 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-02-03 02:48:38.327725 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-02-03 02:48:38.327739 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-02-03 02:48:38.327750 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-02-03 02:48:38.327761 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-02-03 02:48:38.327784 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-02-03 02:48:38.327794 | orchestrator |
2026-02-03 02:48:38.327805 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-02-03 02:48:38.327817 | orchestrator | Tuesday 03 February 2026 02:48:36 +0000 (0:00:01.322) 0:00:10.985 ******
2026-02-03 02:48:38.327828 | orchestrator | skipping: [testbed-node-0]
2026-02-03 02:48:38.327839 | orchestrator | skipping: [testbed-node-1]
2026-02-03 02:48:38.327850 | orchestrator | skipping: [testbed-node-2]
2026-02-03 02:48:38.327860 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:48:38.327871 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:48:38.327882 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:48:38.327893 | orchestrator |
2026-02-03 02:48:38.327904 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-02-03 02:48:38.327915 | orchestrator | Tuesday 03 February 2026 02:48:36 +0000 (0:00:00.176) 0:00:11.161 ******
2026-02-03 02:48:38.327926 | orchestrator | skipping: [testbed-node-0]
2026-02-03 02:48:38.327936 | orchestrator | skipping: [testbed-node-1]
2026-02-03 02:48:38.327947 | orchestrator | skipping: [testbed-node-2]
2026-02-03 02:48:38.327965 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:48:38.327984 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:48:38.328002 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:48:38.328020 | orchestrator |
2026-02-03 02:48:38.328039 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-02-03 02:48:38.328058 | orchestrator | Tuesday 03 February 2026 02:48:36 +0000 (0:00:00.207) 0:00:11.369 ******
2026-02-03 02:48:38.328077 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:48:38.328096 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:48:38.328114 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:48:38.328130 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:48:38.328141 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:48:38.328151 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:48:38.328162 | orchestrator |
2026-02-03 02:48:38.328173 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-02-03 02:48:38.328183 | orchestrator | Tuesday 03 February 2026 02:48:37 +0000 (0:00:00.583) 0:00:11.953 ******
2026-02-03 02:48:38.328194 | orchestrator | skipping: [testbed-node-0]
2026-02-03 02:48:38.328205 | orchestrator | skipping: [testbed-node-1]
2026-02-03 02:48:38.328215 | orchestrator | skipping: [testbed-node-2]
2026-02-03 02:48:38.328226 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:48:38.328247 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:48:38.328258 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:48:38.328269 | orchestrator |
2026-02-03 02:48:38.328279 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-02-03 02:48:38.328290 | orchestrator | Tuesday 03 February 2026 02:48:37 +0000 (0:00:00.176) 0:00:12.130 ******
2026-02-03 02:48:38.328301 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-03 02:48:38.328312 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:48:38.328323 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-02-03 02:48:38.328340 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:48:38.328358 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-02-03 02:48:38.328376 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-03 02:48:38.328395 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-03 02:48:38.328414 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:48:38.328433 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:48:38.328450 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:48:38.328467 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-03 02:48:38.328479 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:48:38.328489 | orchestrator |
2026-02-03 02:48:38.328500 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-02-03 02:48:38.328533 | orchestrator | Tuesday 03 February 2026 02:48:37 +0000 (0:00:00.714) 0:00:12.844 ******
2026-02-03 02:48:38.328555 | orchestrator | skipping: [testbed-node-0]
2026-02-03 02:48:38.328573 | orchestrator | skipping: [testbed-node-1]
2026-02-03 02:48:38.328590 | orchestrator | skipping: [testbed-node-2]
2026-02-03 02:48:38.328607 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:48:38.328625 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:48:38.328643 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:48:38.328660 | orchestrator |
2026-02-03 02:48:38.328677 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-02-03 02:48:38.328697 | orchestrator | Tuesday 03 February 2026 02:48:38 +0000 (0:00:00.185) 0:00:13.008 ******
2026-02-03 02:48:38.328716 | orchestrator | skipping: [testbed-node-0]
2026-02-03 02:48:38.328734 | orchestrator | skipping: [testbed-node-1]
2026-02-03 02:48:38.328753 | orchestrator | skipping: [testbed-node-2]
2026-02-03 02:48:38.328771 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:48:38.328802 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:48:39.693401 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:48:39.693592 | orchestrator |
2026-02-03 02:48:39.693625 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-02-03 02:48:39.693648 | orchestrator | Tuesday 03 February 2026 02:48:38 +0000 (0:00:00.179) 0:00:13.194 ******
2026-02-03 02:48:39.693667 | orchestrator | skipping: [testbed-node-0]
2026-02-03 02:48:39.693687 | orchestrator | skipping: [testbed-node-1]
2026-02-03 02:48:39.693707 | orchestrator | skipping: [testbed-node-2]
2026-02-03 02:48:39.693727 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:48:39.693745 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:48:39.693765 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:48:39.693811 | orchestrator |
2026-02-03 02:48:39.693831 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-02-03 02:48:39.693850 | orchestrator | Tuesday 03 February 2026 02:48:38 +0000 (0:00:00.179) 0:00:13.373 ******
2026-02-03 02:48:39.693869 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:48:39.693887 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:48:39.693929 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:48:39.693951 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:48:39.693971 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:48:39.693992 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:48:39.694010 | orchestrator |
2026-02-03 02:48:39.694108 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-02-03 02:48:39.694131 | orchestrator | Tuesday 03 February 2026 02:48:39 +0000 (0:00:00.684) 0:00:14.057 ******
2026-02-03 02:48:39.694153 | orchestrator | skipping: [testbed-node-0]
2026-02-03 02:48:39.694172 | orchestrator | skipping: [testbed-node-1]
2026-02-03 02:48:39.694193 | orchestrator | skipping: [testbed-node-2]
2026-02-03 02:48:39.694215 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:48:39.694236 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:48:39.694255 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:48:39.694275 | orchestrator |
2026-02-03 02:48:39.694296 | orchestrator | PLAY RECAP *********************************************************************
2026-02-03 02:48:39.694316 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-03 02:48:39.694337 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-03 02:48:39.694357 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-03 02:48:39.694375 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-03 02:48:39.694393 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-03 02:48:39.694442 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-03 02:48:39.694463 | orchestrator |
2026-02-03 02:48:39.694480 | orchestrator |
2026-02-03 02:48:39.694499 | orchestrator | TASKS RECAP ********************************************************************
2026-02-03 02:48:39.694546 | orchestrator | Tuesday 03 February 2026 02:48:39 +0000 (0:00:00.250) 0:00:14.308 ******
2026-02-03 02:48:39.694564 | orchestrator | ===============================================================================
2026-02-03 02:48:39.694583 | orchestrator | Gathering Facts --------------------------------------------------------- 4.35s
2026-02-03 02:48:39.694600 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.32s
2026-02-03 02:48:39.694620 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.32s
2026-02-03 02:48:39.694638 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.25s
2026-02-03 02:48:39.694657 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.83s
2026-02-03 02:48:39.694675 | orchestrator | Do not require tty for all users ---------------------------------------- 0.78s
2026-02-03 02:48:39.694695 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.71s
2026-02-03 02:48:39.694714 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.68s
2026-02-03 02:48:39.694732 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.64s
2026-02-03 02:48:39.694749 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.58s
2026-02-03 02:48:39.694768 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.25s
2026-02-03 02:48:39.694786 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.21s
2026-02-03 02:48:39.694805 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.19s
2026-02-03 02:48:39.694822 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.18s
2026-02-03 02:48:39.694841 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s
2026-02-03 02:48:39.694859 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s
2026-02-03 02:48:39.694877 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.18s
2026-02-03 02:48:39.694895 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s
2026-02-03 02:48:39.694912 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s
2026-02-03 02:48:39.999702 | orchestrator | + osism apply --environment custom facts
2026-02-03 02:48:42.052995 | orchestrator | 2026-02-03 02:48:42 | INFO  | Trying to run play facts in environment custom
2026-02-03 02:48:52.229953 | orchestrator | 2026-02-03 02:48:52 | INFO  | Task 3216a2b7-d133-4870-ab1f-0f73936d1fe2 (facts) was prepared for execution.
2026-02-03 02:48:52.230095 | orchestrator | 2026-02-03 02:48:52 | INFO  | It takes a moment until task 3216a2b7-d133-4870-ab1f-0f73936d1fe2 (facts) has been started and output is visible here.
2026-02-03 02:49:36.570985 | orchestrator |
2026-02-03 02:49:36.571075 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-02-03 02:49:36.571086 | orchestrator |
2026-02-03 02:49:36.571095 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-03 02:49:36.571104 | orchestrator | Tuesday 03 February 2026 02:48:56 +0000 (0:00:00.086) 0:00:00.086 ******
2026-02-03 02:49:36.571112 | orchestrator | ok: [testbed-manager]
2026-02-03 02:49:36.571121 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:49:36.571130 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:49:36.571137 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:49:36.571145 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:49:36.571152 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:49:36.571177 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:49:36.571184 | orchestrator |
2026-02-03 02:49:36.571193 | orchestrator | TASK [Copy fact file] **********************************************************
2026-02-03 02:49:36.571200 | orchestrator | Tuesday 03 February 2026 02:48:57 +0000 (0:00:01.379) 0:00:01.465 ******
2026-02-03 02:49:36.571207 | orchestrator | ok: [testbed-manager]
2026-02-03 02:49:36.571214 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:49:36.571221 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:49:36.571228 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:49:36.571235 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:49:36.571242 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:49:36.571249 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:49:36.571256 | orchestrator |
2026-02-03 02:49:36.571263 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-02-03 02:49:36.571269 | orchestrator |
2026-02-03 02:49:36.571276 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-02-03 02:49:36.571283 | orchestrator | Tuesday 03 February 2026 02:48:58 +0000 (0:00:01.103) 0:00:02.568 ******
2026-02-03 02:49:36.571290 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:49:36.571298 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:49:36.571305 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:49:36.571312 | orchestrator |
2026-02-03 02:49:36.571318 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-02-03 02:49:36.571326 | orchestrator | Tuesday 03 February 2026 02:48:58 +0000 (0:00:00.099) 0:00:02.668 ******
2026-02-03 02:49:36.571333 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:49:36.571341 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:49:36.571348 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:49:36.571355 | orchestrator |
2026-02-03 02:49:36.571362 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-02-03 02:49:36.571370 | orchestrator | Tuesday 03 February 2026 02:48:59 +0000 (0:00:00.212) 0:00:02.880 ******
2026-02-03 02:49:36.571377 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:49:36.571384 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:49:36.571391 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:49:36.571398 | orchestrator |
2026-02-03 02:49:36.571406 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-02-03 02:49:36.571414 | orchestrator | Tuesday 03 February 2026 02:48:59 +0000 (0:00:00.221) 0:00:03.101 ******
2026-02-03 02:49:36.571422 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 02:49:36.571430 | orchestrator |
2026-02-03 02:49:36.571437 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-02-03 02:49:36.571444 | orchestrator | Tuesday 03 February 2026 02:48:59 +0000 (0:00:00.152) 0:00:03.254 ******
2026-02-03 02:49:36.571452 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:49:36.571458 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:49:36.571465 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:49:36.571472 | orchestrator |
2026-02-03 02:49:36.571480 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-02-03 02:49:36.571487 | orchestrator | Tuesday 03 February 2026 02:48:59 +0000 (0:00:00.473) 0:00:03.728 ******
2026-02-03 02:49:36.571494 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:49:36.571502 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:49:36.571509 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:49:36.571516 | orchestrator |
2026-02-03 02:49:36.571523 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-02-03 02:49:36.571530 | orchestrator | Tuesday 03 February 2026 02:49:00 +0000 (0:00:00.130) 0:00:03.858 ******
2026-02-03 02:49:36.571537 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:49:36.571544 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:49:36.571582 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:49:36.571589 | orchestrator |
2026-02-03 02:49:36.571597 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-02-03 02:49:36.571610 | orchestrator | Tuesday 03 February 2026 02:49:01 +0000 (0:00:01.114) 0:00:04.973 ******
2026-02-03 02:49:36.571618 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:49:36.571624 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:49:36.571632 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:49:36.571637 | orchestrator |
2026-02-03 02:49:36.571641 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-02-03 02:49:36.571668 | orchestrator | Tuesday 03 February 2026 02:49:01 +0000 (0:00:00.460) 0:00:05.434 ******
2026-02-03 02:49:36.571673 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:49:36.571677 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:49:36.571682 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:49:36.571686 | orchestrator |
2026-02-03 02:49:36.571690 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-02-03 02:49:36.571695 | orchestrator | Tuesday 03 February 2026 02:49:02 +0000 (0:00:01.112) 0:00:06.547 ******
2026-02-03 02:49:36.571699 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:49:36.571704 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:49:36.571708 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:49:36.571712 | orchestrator |
2026-02-03 02:49:36.571716 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-02-03 02:49:36.571721 | orchestrator | Tuesday 03 February 2026 02:49:19 +0000 (0:00:16.391) 0:00:22.938 ******
2026-02-03 02:49:36.571725 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:49:36.571730 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:49:36.571734 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:49:36.571738 | orchestrator |
2026-02-03 02:49:36.571743 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-02-03 02:49:36.571759 | orchestrator | Tuesday 03 February 2026 02:49:19 +0000 (0:00:00.113) 0:00:23.051 ******
2026-02-03 02:49:36.571764 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:49:36.571768 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:49:36.571772 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:49:36.571777 | orchestrator |
2026-02-03 02:49:36.571784 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-03 02:49:36.571789 | orchestrator | Tuesday 03 February 2026 02:49:27 +0000 (0:00:08.146) 0:00:31.198 ******
2026-02-03 02:49:36.571793 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:49:36.571797 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:49:36.571802 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:49:36.571806 | orchestrator |
2026-02-03 02:49:36.571810 | orchestrator | TASK [Copy fact files] *********************************************************
2026-02-03 02:49:36.571815 | orchestrator | Tuesday 03 February 2026 02:49:27 +0000 (0:00:00.475) 0:00:31.673 ******
2026-02-03 02:49:36.571819 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-02-03 02:49:36.571824 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-02-03 02:49:36.571829 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-02-03 02:49:36.571833 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-02-03 02:49:36.571837 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-02-03 02:49:36.571842 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-02-03 02:49:36.571846 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-02-03 02:49:36.571850 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-02-03 02:49:36.571855 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-02-03 02:49:36.571859 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-02-03 02:49:36.571864 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-02-03 02:49:36.571868 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-02-03 02:49:36.571872 | orchestrator |
2026-02-03 02:49:36.571877 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-02-03 02:49:36.571886 | orchestrator | Tuesday 03 February 2026 02:49:31 +0000 (0:00:03.520) 0:00:35.193 ******
2026-02-03 02:49:36.571890 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:49:36.571894 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:49:36.571899 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:49:36.571903 | orchestrator |
2026-02-03 02:49:36.571907 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-03 02:49:36.571912 | orchestrator |
2026-02-03 02:49:36.571916 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-03 02:49:36.571921 | orchestrator | Tuesday 03 February 2026 02:49:32 +0000 (0:00:01.375) 0:00:36.569 ******
2026-02-03 02:49:36.571925 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:49:36.571929 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:49:36.571934 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:49:36.571938 | orchestrator | ok: [testbed-manager]
2026-02-03 02:49:36.571943 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:49:36.571947 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:49:36.571951 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:49:36.571956 | orchestrator |
2026-02-03 02:49:36.571960 | orchestrator | PLAY RECAP *********************************************************************
2026-02-03 02:49:36.571965 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-03 02:49:36.571970 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-03 02:49:36.571976 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-03 02:49:36.571981 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-03 02:49:36.571985 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-03 02:49:36.571990 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-03 02:49:36.571994 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-03 02:49:36.571999 | orchestrator |
2026-02-03 02:49:36.572003 | orchestrator |
2026-02-03 02:49:36.572008 | orchestrator | TASKS RECAP ********************************************************************
2026-02-03 02:49:36.572012 | orchestrator | Tuesday 03 February 2026 02:49:36 +0000 (0:00:03.766) 0:00:40.336 ******
2026-02-03 02:49:36.572016 | orchestrator | ===============================================================================
2026-02-03 02:49:36.572021 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.39s
2026-02-03 02:49:36.572025 | orchestrator | Install required packages (Debian) -------------------------------------- 8.15s
2026-02-03 02:49:36.572030 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.77s
2026-02-03 02:49:36.572034 | orchestrator | Copy fact files --------------------------------------------------------- 3.52s
2026-02-03 02:49:36.572038 | orchestrator | Create custom facts directory ------------------------------------------- 1.38s
2026-02-03 02:49:36.572043 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.38s
2026-02-03 02:49:36.572050 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.11s
2026-02-03 02:49:36.795378 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.11s
2026-02-03 02:49:36.795505 | orchestrator | Copy fact file ---------------------------------------------------------- 1.10s
2026-02-03 02:49:36.795583 | orchestrator | Create custom facts directory ------------------------------------------- 0.48s
2026-02-03 02:49:36.795632 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.47s
2026-02-03 02:49:36.795651 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2026-02-03 02:49:36.795667 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.22s
2026-02-03 02:49:36.795684 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s
2026-02-03 02:49:36.795701 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s
2026-02-03 02:49:36.795721 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.13s
2026-02-03 02:49:36.795740 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s
2026-02-03 02:49:36.795757 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2026-02-03 02:49:37.116643 | orchestrator | + osism apply bootstrap
2026-02-03 02:49:49.205771 | orchestrator | 2026-02-03 02:49:49 | INFO  | Task 202f22f4-b86f-43e3-9cec-9b4b0b3b13bb (bootstrap) was prepared for execution.
2026-02-03 02:49:49.205874 | orchestrator | 2026-02-03 02:49:49 | INFO  | It takes a moment until task 202f22f4-b86f-43e3-9cec-9b4b0b3b13bb (bootstrap) has been started and output is visible here.

PLAY [Group hosts based on state bootstrap] ************************************

TASK [Group hosts based on state bootstrap] ************************************
Tuesday 03 February 2026 02:49:53 +0000 (0:00:00.155) 0:00:00.155 ******
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Gather facts for all hosts] **********************************************

TASK [Gathers facts about hosts] ***********************************************
Tuesday 03 February 2026 02:49:53 +0000 (0:00:00.273) 0:00:00.428 ******
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-manager]

PLAY [Gather facts for all hosts (if using --limit)] ***************************

TASK [Gathers facts about hosts] ***********************************************
Tuesday 03 February 2026 02:49:58 +0000 (0:00:05.000) 0:00:05.429 ******
skipping: [testbed-manager] => (item=testbed-manager)
skipping: [testbed-manager] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-manager)
skipping: [testbed-manager] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-manager] => (item=testbed-node-5)
skipping: [testbed-node-4] => (item=testbed-manager)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-5] => (item=testbed-manager)
skipping: [testbed-manager] => (item=testbed-node-0)
skipping: [testbed-node-4] => (item=testbed-node-3)
skipping: [testbed-node-5] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-0)
skipping: [testbed-manager] => (item=testbed-node-1)
skipping: [testbed-node-4] => (item=testbed-node-4)
skipping: [testbed-node-5] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-1)
skipping: [testbed-node-5] => (item=testbed-node-5)
skipping: [testbed-manager] => (item=testbed-node-2)
skipping: [testbed-node-0] => (item=testbed-manager)
skipping: [testbed-manager]
skipping: [testbed-node-4] => (item=testbed-node-5)
skipping: [testbed-node-3] => (item=testbed-node-2)
skipping: [testbed-node-5] => (item=testbed-node-0)
skipping: [testbed-node-3]
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-4] => (item=testbed-node-0)
skipping: [testbed-node-5] => (item=testbed-node-1)
skipping: [testbed-node-1] => (item=testbed-manager)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-5] => (item=testbed-node-2)
skipping: [testbed-node-5]
skipping: [testbed-node-1] => (item=testbed-node-3)
skipping: [testbed-node-4] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-4] => (item=testbed-node-2)
skipping: [testbed-node-1] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-4]
skipping: [testbed-node-2] => (item=testbed-manager)
skipping: [testbed-node-1] => (item=testbed-node-5)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-2] => (item=testbed-node-3)
skipping: [testbed-node-1] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]
skipping: [testbed-node-2] => (item=testbed-node-4)
skipping: [testbed-node-1] => (item=testbed-node-1)
skipping: [testbed-node-2] => (item=testbed-node-5)
skipping: [testbed-node-1] => (item=testbed-node-2)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=testbed-node-0)
skipping: [testbed-node-2] => (item=testbed-node-1)
skipping: [testbed-node-2] => (item=testbed-node-2)
skipping: [testbed-node-2]

PLAY [Apply bootstrap roles part 1] ********************************************

TASK [osism.commons.hostname : Set hostname] ***********************************
Tuesday 03 February 2026 02:49:59 +0000 (0:00:00.464) 0:00:05.893 ******
ok: [testbed-node-0]
ok: [testbed-node-3]
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-2]
ok: [testbed-manager]
ok: [testbed-node-5]

TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
Tuesday 03 February 2026 02:50:00 +0000 (0:00:01.303) 0:00:07.197 ******
ok: [testbed-manager]
ok: [testbed-node-4]
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-5]
ok: [testbed-node-3]
ok: [testbed-node-1]

TASK [osism.commons.hosts : Include type specific tasks] ***********************
Tuesday 03 February 2026 02:50:01 +0000 (0:00:00.284) 0:00:08.461 ******
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
Tuesday 03 February 2026 02:50:02 +0000 (0:00:00.284) 0:00:08.746 ******
changed: [testbed-node-2]
changed: [testbed-manager]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-0]
changed: [testbed-node-1]

TASK [osism.commons.proxy : Include distribution specific tasks] ***************
Tuesday 03 February 2026 02:50:04 +0000 (0:00:02.092) 0:00:10.839 ******
skipping: [testbed-manager]
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
Tuesday 03 February 2026 02:50:04 +0000 (0:00:00.270) 0:00:11.109 ******
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-5]

TASK [osism.commons.proxy : Set system wide settings in environment file] ******
Tuesday 03 February 2026 02:50:05 +0000 (0:00:01.034) 0:00:12.144 ******
skipping: [testbed-manager]
changed: [testbed-node-2]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-3]
changed: [testbed-node-4]

TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
Tuesday 03 February 2026 02:50:06 +0000 (0:00:00.620) 0:00:12.764 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-manager]

TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
Tuesday 03 February 2026 02:50:06 +0000 (0:00:00.219) 0:00:13.205 ******
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
Tuesday 03 February 2026 02:50:06 +0000 (0:00:00.219) 0:00:13.424 ******
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
Tuesday 03 February 2026 02:50:07 +0000 (0:00:00.310) 0:00:13.734 ******
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
Tuesday 03 February 2026 02:50:07 +0000 (0:00:00.287) 0:00:14.022 ******
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-manager]
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-3]

TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
Tuesday 03 February 2026 02:50:08 +0000 (0:00:01.501) 0:00:15.524 ******
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
Tuesday 03 February 2026 02:50:09 +0000 (0:00:00.238) 0:00:15.762 ******
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
Tuesday 03 February 2026 02:50:09 +0000 (0:00:00.555) 0:00:16.318 ******
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
Tuesday 03 February 2026 02:50:10 +0000 (0:00:00.335) 0:00:16.654 ******
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.commons.resolvconf : Copy configuration files] *********************
Tuesday 03 February 2026 02:50:10 +0000 (0:00:00.547) 0:00:17.202 ******
ok: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-5]
changed: [testbed-node-2]

TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
Tuesday 03 February 2026 02:50:11 +0000 (0:00:01.201) 0:00:18.403 ******
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-0]

TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
Tuesday 03 February 2026 02:50:12 +0000 (0:00:01.071) 0:00:19.475 ******
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
Tuesday 03 February 2026 02:50:13 +0000 (0:00:00.304) 0:00:19.779 ******
skipping: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-4]
changed: [testbed-node-3]

TASK [osism.commons.repository : Gather variables for each operating system] ***
Tuesday 03 February 2026 02:50:14 +0000 (0:00:01.482) 0:00:21.261 ******
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.repository : Set repository_default fact to default value] ***
Tuesday 03 February 2026 02:50:14 +0000 (0:00:00.240) 0:00:21.502 ******
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.repository : Set repositories to default] ******************
Tuesday 03 February 2026 02:50:15 +0000 (0:00:00.245) 0:00:21.747 ******
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.repository : Include distribution specific repository tasks] ***
Tuesday 03 February 2026 02:50:15 +0000 (0:00:00.261) 0:00:22.008 ******
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
Tuesday 03 February 2026 02:50:15 +0000 (0:00:00.314) 0:00:22.323 ******
ok: [testbed-manager]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-5]
ok: [testbed-node-2]

TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
Tuesday 03 February 2026 02:50:16 +0000 (0:00:00.551) 0:00:22.874 ******
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
Tuesday 03 February 2026 02:50:16 +0000 (0:00:00.239) 0:00:23.114 ******
ok: [testbed-manager]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-3]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [osism.commons.repository : Remove sources.list file] *********************
Tuesday 03 February 2026 02:50:17 +0000 (0:00:01.094) 0:00:24.208 ******
ok: [testbed-manager]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-2]
ok: [testbed-node-3]

TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
Tuesday 03 February 2026 02:50:18 +0000 (0:00:00.621) 0:00:24.830 ******
ok: [testbed-manager]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-3]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]

TASK [osism.commons.repository : Update package cache] *************************
Tuesday 03 February 2026 02:50:19 +0000 (0:00:01.153) 0:00:25.984 ******
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]
changed: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]

TASK [osism.services.rsyslog : Gather variables for each operating system] *****
Tuesday 03 February 2026 02:50:35 +0000 (0:00:16.490) 0:00:42.474 ******
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
Tuesday 03 February 2026 02:50:36 +0000 (0:00:00.253) 0:00:42.728 ******
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
Tuesday 03 February 2026 02:50:36 +0000 (0:00:00.221) 0:00:42.949 ******
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
Tuesday 03 February 2026 02:50:36 +0000 (0:00:00.234) 0:00:43.184 ******
2026-02-03
02:51:01.397954 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 02:51:01.397959 | orchestrator | 2026-02-03 02:51:01.397963 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-02-03 02:51:01.397967 | orchestrator | Tuesday 03 February 2026 02:50:36 +0000 (0:00:00.295) 0:00:43.480 ****** 2026-02-03 02:51:01.397971 | orchestrator | ok: [testbed-manager] 2026-02-03 02:51:01.397974 | orchestrator | ok: [testbed-node-0] 2026-02-03 02:51:01.397978 | orchestrator | ok: [testbed-node-1] 2026-02-03 02:51:01.397982 | orchestrator | ok: [testbed-node-4] 2026-02-03 02:51:01.397985 | orchestrator | ok: [testbed-node-2] 2026-02-03 02:51:01.397989 | orchestrator | ok: [testbed-node-3] 2026-02-03 02:51:01.397993 | orchestrator | ok: [testbed-node-5] 2026-02-03 02:51:01.397997 | orchestrator | 2026-02-03 02:51:01.398000 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-02-03 02:51:01.398004 | orchestrator | Tuesday 03 February 2026 02:50:38 +0000 (0:00:01.737) 0:00:45.217 ****** 2026-02-03 02:51:01.398008 | orchestrator | changed: [testbed-manager] 2026-02-03 02:51:01.398012 | orchestrator | changed: [testbed-node-4] 2026-02-03 02:51:01.398050 | orchestrator | changed: [testbed-node-3] 2026-02-03 02:51:01.398054 | orchestrator | changed: [testbed-node-5] 2026-02-03 02:51:01.398058 | orchestrator | changed: [testbed-node-0] 2026-02-03 02:51:01.398061 | orchestrator | changed: [testbed-node-1] 2026-02-03 02:51:01.398065 | orchestrator | changed: [testbed-node-2] 2026-02-03 02:51:01.398069 | orchestrator | 2026-02-03 02:51:01.398073 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-02-03 02:51:01.398086 | 
orchestrator | Tuesday 03 February 2026 02:50:39 +0000 (0:00:01.101) 0:00:46.318 ****** 2026-02-03 02:51:01.398090 | orchestrator | ok: [testbed-node-3] 2026-02-03 02:51:01.398094 | orchestrator | ok: [testbed-manager] 2026-02-03 02:51:01.398098 | orchestrator | ok: [testbed-node-4] 2026-02-03 02:51:01.398106 | orchestrator | ok: [testbed-node-5] 2026-02-03 02:51:01.398109 | orchestrator | ok: [testbed-node-0] 2026-02-03 02:51:01.398113 | orchestrator | ok: [testbed-node-1] 2026-02-03 02:51:01.398117 | orchestrator | ok: [testbed-node-2] 2026-02-03 02:51:01.398120 | orchestrator | 2026-02-03 02:51:01.398124 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-02-03 02:51:01.398128 | orchestrator | Tuesday 03 February 2026 02:50:40 +0000 (0:00:00.807) 0:00:47.126 ****** 2026-02-03 02:51:01.398133 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 02:51:01.398138 | orchestrator | 2026-02-03 02:51:01.398142 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-02-03 02:51:01.398147 | orchestrator | Tuesday 03 February 2026 02:50:40 +0000 (0:00:00.314) 0:00:47.440 ****** 2026-02-03 02:51:01.398151 | orchestrator | changed: [testbed-manager] 2026-02-03 02:51:01.398155 | orchestrator | changed: [testbed-node-3] 2026-02-03 02:51:01.398158 | orchestrator | changed: [testbed-node-4] 2026-02-03 02:51:01.398162 | orchestrator | changed: [testbed-node-5] 2026-02-03 02:51:01.398166 | orchestrator | changed: [testbed-node-0] 2026-02-03 02:51:01.398169 | orchestrator | changed: [testbed-node-1] 2026-02-03 02:51:01.398173 | orchestrator | changed: [testbed-node-2] 2026-02-03 02:51:01.398177 | orchestrator | 2026-02-03 02:51:01.398192 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2026-02-03 02:51:01.398196 | orchestrator | Tuesday 03 February 2026 02:50:41 +0000 (0:00:01.012) 0:00:48.453 ****** 2026-02-03 02:51:01.398200 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:51:01.398204 | orchestrator | skipping: [testbed-node-3] 2026-02-03 02:51:01.398216 | orchestrator | skipping: [testbed-node-4] 2026-02-03 02:51:01.398221 | orchestrator | skipping: [testbed-node-5] 2026-02-03 02:51:01.398225 | orchestrator | skipping: [testbed-node-0] 2026-02-03 02:51:01.398243 | orchestrator | skipping: [testbed-node-1] 2026-02-03 02:51:01.398248 | orchestrator | skipping: [testbed-node-2] 2026-02-03 02:51:01.398252 | orchestrator | 2026-02-03 02:51:01.398257 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-02-03 02:51:01.398261 | orchestrator | Tuesday 03 February 2026 02:50:42 +0000 (0:00:00.224) 0:00:48.678 ****** 2026-02-03 02:51:01.398266 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 02:51:01.398272 | orchestrator | 2026-02-03 02:51:01.398279 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-02-03 02:51:01.398284 | orchestrator | Tuesday 03 February 2026 02:50:42 +0000 (0:00:00.323) 0:00:49.002 ****** 2026-02-03 02:51:01.398290 | orchestrator | ok: [testbed-manager] 2026-02-03 02:51:01.398295 | orchestrator | ok: [testbed-node-1] 2026-02-03 02:51:01.398301 | orchestrator | ok: [testbed-node-4] 2026-02-03 02:51:01.398308 | orchestrator | ok: [testbed-node-2] 2026-02-03 02:51:01.398314 | orchestrator | ok: [testbed-node-0] 2026-02-03 02:51:01.398320 | orchestrator | ok: [testbed-node-3] 2026-02-03 02:51:01.398326 | orchestrator | ok: [testbed-node-5] 2026-02-03 02:51:01.398331 | 
orchestrator | 2026-02-03 02:51:01.398337 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-02-03 02:51:01.398343 | orchestrator | Tuesday 03 February 2026 02:50:44 +0000 (0:00:01.691) 0:00:50.693 ****** 2026-02-03 02:51:01.398350 | orchestrator | changed: [testbed-manager] 2026-02-03 02:51:01.398358 | orchestrator | changed: [testbed-node-3] 2026-02-03 02:51:01.398364 | orchestrator | changed: [testbed-node-4] 2026-02-03 02:51:01.398369 | orchestrator | changed: [testbed-node-5] 2026-02-03 02:51:01.398374 | orchestrator | changed: [testbed-node-0] 2026-02-03 02:51:01.398378 | orchestrator | changed: [testbed-node-1] 2026-02-03 02:51:01.398383 | orchestrator | changed: [testbed-node-2] 2026-02-03 02:51:01.398391 | orchestrator | 2026-02-03 02:51:01.398395 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-02-03 02:51:01.398399 | orchestrator | Tuesday 03 February 2026 02:50:45 +0000 (0:00:01.156) 0:00:51.850 ****** 2026-02-03 02:51:01.398402 | orchestrator | changed: [testbed-node-1] 2026-02-03 02:51:01.398406 | orchestrator | changed: [testbed-node-2] 2026-02-03 02:51:01.398410 | orchestrator | changed: [testbed-node-4] 2026-02-03 02:51:01.398414 | orchestrator | changed: [testbed-node-5] 2026-02-03 02:51:01.398417 | orchestrator | changed: [testbed-node-3] 2026-02-03 02:51:01.398421 | orchestrator | changed: [testbed-node-0] 2026-02-03 02:51:01.398425 | orchestrator | changed: [testbed-manager] 2026-02-03 02:51:01.398428 | orchestrator | 2026-02-03 02:51:01.398432 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-02-03 02:51:01.398436 | orchestrator | Tuesday 03 February 2026 02:50:58 +0000 (0:00:13.379) 0:01:05.229 ****** 2026-02-03 02:51:01.398440 | orchestrator | ok: [testbed-node-0] 2026-02-03 02:51:01.398443 | orchestrator | ok: [testbed-node-1] 2026-02-03 02:51:01.398447 | orchestrator | ok: 
[testbed-node-3] 2026-02-03 02:51:01.398451 | orchestrator | ok: [testbed-node-4] 2026-02-03 02:51:01.398454 | orchestrator | ok: [testbed-node-5] 2026-02-03 02:51:01.398458 | orchestrator | ok: [testbed-node-2] 2026-02-03 02:51:01.398462 | orchestrator | ok: [testbed-manager] 2026-02-03 02:51:01.398466 | orchestrator | 2026-02-03 02:51:01.398469 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-02-03 02:51:01.398473 | orchestrator | Tuesday 03 February 2026 02:50:59 +0000 (0:00:01.014) 0:01:06.244 ****** 2026-02-03 02:51:01.398477 | orchestrator | ok: [testbed-manager] 2026-02-03 02:51:01.398481 | orchestrator | ok: [testbed-node-3] 2026-02-03 02:51:01.398484 | orchestrator | ok: [testbed-node-4] 2026-02-03 02:51:01.398488 | orchestrator | ok: [testbed-node-5] 2026-02-03 02:51:01.398492 | orchestrator | ok: [testbed-node-1] 2026-02-03 02:51:01.398495 | orchestrator | ok: [testbed-node-0] 2026-02-03 02:51:01.398499 | orchestrator | ok: [testbed-node-2] 2026-02-03 02:51:01.398503 | orchestrator | 2026-02-03 02:51:01.398507 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-02-03 02:51:01.398510 | orchestrator | Tuesday 03 February 2026 02:51:00 +0000 (0:00:00.953) 0:01:07.198 ****** 2026-02-03 02:51:01.398519 | orchestrator | ok: [testbed-manager] 2026-02-03 02:51:01.398522 | orchestrator | ok: [testbed-node-3] 2026-02-03 02:51:01.398526 | orchestrator | ok: [testbed-node-4] 2026-02-03 02:51:01.398530 | orchestrator | ok: [testbed-node-5] 2026-02-03 02:51:01.398534 | orchestrator | ok: [testbed-node-0] 2026-02-03 02:51:01.398537 | orchestrator | ok: [testbed-node-1] 2026-02-03 02:51:01.398541 | orchestrator | ok: [testbed-node-2] 2026-02-03 02:51:01.398545 | orchestrator | 2026-02-03 02:51:01.398549 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-02-03 02:51:01.398553 | orchestrator | Tuesday 
03 February 2026 02:51:00 +0000 (0:00:00.208) 0:01:07.407 ****** 2026-02-03 02:51:01.398556 | orchestrator | ok: [testbed-manager] 2026-02-03 02:51:01.398560 | orchestrator | ok: [testbed-node-3] 2026-02-03 02:51:01.398564 | orchestrator | ok: [testbed-node-4] 2026-02-03 02:51:01.398567 | orchestrator | ok: [testbed-node-5] 2026-02-03 02:51:01.398571 | orchestrator | ok: [testbed-node-0] 2026-02-03 02:51:01.398575 | orchestrator | ok: [testbed-node-1] 2026-02-03 02:51:01.398579 | orchestrator | ok: [testbed-node-2] 2026-02-03 02:51:01.398582 | orchestrator | 2026-02-03 02:51:01.398586 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-02-03 02:51:01.398590 | orchestrator | Tuesday 03 February 2026 02:51:01 +0000 (0:00:00.232) 0:01:07.639 ****** 2026-02-03 02:51:01.398594 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 02:51:01.398598 | orchestrator | 2026-02-03 02:51:01.398606 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-02-03 02:53:16.894519 | orchestrator | Tuesday 03 February 2026 02:51:01 +0000 (0:00:00.318) 0:01:07.958 ****** 2026-02-03 02:53:16.894662 | orchestrator | ok: [testbed-node-1] 2026-02-03 02:53:16.894690 | orchestrator | ok: [testbed-manager] 2026-02-03 02:53:16.894711 | orchestrator | ok: [testbed-node-2] 2026-02-03 02:53:16.894729 | orchestrator | ok: [testbed-node-3] 2026-02-03 02:53:16.894748 | orchestrator | ok: [testbed-node-5] 2026-02-03 02:53:16.894760 | orchestrator | ok: [testbed-node-4] 2026-02-03 02:53:16.894771 | orchestrator | ok: [testbed-node-0] 2026-02-03 02:53:16.894782 | orchestrator | 2026-02-03 02:53:16.894795 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 
2026-02-03 02:53:16.894807 | orchestrator | Tuesday 03 February 2026 02:51:03 +0000 (0:00:01.676) 0:01:09.634 ******
2026-02-03 02:53:16.894818 | orchestrator | changed: [testbed-manager]
2026-02-03 02:53:16.894831 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:53:16.894842 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:53:16.894853 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:53:16.894864 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:53:16.894875 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:53:16.894886 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:53:16.894896 | orchestrator |
2026-02-03 02:53:16.894908 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-02-03 02:53:16.894921 | orchestrator | Tuesday 03 February 2026 02:51:03 +0000 (0:00:00.590) 0:01:10.225 ******
2026-02-03 02:53:16.894932 | orchestrator | ok: [testbed-manager]
2026-02-03 02:53:16.894943 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:53:16.894954 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:53:16.894964 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:53:16.895056 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:53:16.895069 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:53:16.895081 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:53:16.895094 | orchestrator |
2026-02-03 02:53:16.895108 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-02-03 02:53:16.895122 | orchestrator | Tuesday 03 February 2026 02:51:03 +0000 (0:00:00.251) 0:01:10.477 ******
2026-02-03 02:53:16.895135 | orchestrator | ok: [testbed-manager]
2026-02-03 02:53:16.895147 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:53:16.895160 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:53:16.895173 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:53:16.895186 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:53:16.895198 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:53:16.895209 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:53:16.895219 | orchestrator |
2026-02-03 02:53:16.895230 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-02-03 02:53:16.895241 | orchestrator | Tuesday 03 February 2026 02:51:05 +0000 (0:00:01.175) 0:01:11.653 ******
2026-02-03 02:53:16.895252 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:53:16.895263 | orchestrator | changed: [testbed-manager]
2026-02-03 02:53:16.895274 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:53:16.895285 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:53:16.895296 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:53:16.895306 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:53:16.895317 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:53:16.895328 | orchestrator |
2026-02-03 02:53:16.895344 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-02-03 02:53:16.895355 | orchestrator | Tuesday 03 February 2026 02:51:06 +0000 (0:00:01.749) 0:01:13.402 ******
2026-02-03 02:53:16.895366 | orchestrator | ok: [testbed-manager]
2026-02-03 02:53:16.895377 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:53:16.895388 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:53:16.895399 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:53:16.895410 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:53:16.895421 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:53:16.895432 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:53:16.895443 | orchestrator |
2026-02-03 02:53:16.895454 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-02-03 02:53:16.895493 | orchestrator | Tuesday 03 February 2026 02:51:09 +0000 (0:00:02.510) 0:01:15.913 ******
2026-02-03 02:53:16.895505 | orchestrator | ok: [testbed-manager]
2026-02-03 02:53:16.895516 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:53:16.895527 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:53:16.895538 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:53:16.895548 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:53:16.895559 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:53:16.895570 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:53:16.895580 | orchestrator |
2026-02-03 02:53:16.895591 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-02-03 02:53:16.895602 | orchestrator | Tuesday 03 February 2026 02:51:43 +0000 (0:00:34.214) 0:01:50.128 ******
2026-02-03 02:53:16.895613 | orchestrator | changed: [testbed-manager]
2026-02-03 02:53:16.895624 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:53:16.895635 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:53:16.895646 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:53:16.895657 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:53:16.895668 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:53:16.895678 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:53:16.895689 | orchestrator |
2026-02-03 02:53:16.895700 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-02-03 02:53:16.895711 | orchestrator | Tuesday 03 February 2026 02:53:01 +0000 (0:01:17.844) 0:03:07.972 ******
2026-02-03 02:53:16.895722 | orchestrator | ok: [testbed-manager]
2026-02-03 02:53:16.895733 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:53:16.895744 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:53:16.895755 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:53:16.895766 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:53:16.895776 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:53:16.895787 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:53:16.895798 | orchestrator |
2026-02-03 02:53:16.895808 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-02-03 02:53:16.895820 | orchestrator | Tuesday 03 February 2026 02:53:03 +0000 (0:00:02.009) 0:03:09.981 ******
2026-02-03 02:53:16.895830 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:53:16.895841 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:53:16.895852 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:53:16.895863 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:53:16.895873 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:53:16.895884 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:53:16.895894 | orchestrator | changed: [testbed-manager]
2026-02-03 02:53:16.895905 | orchestrator |
2026-02-03 02:53:16.895916 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-02-03 02:53:16.895927 | orchestrator | Tuesday 03 February 2026 02:53:15 +0000 (0:00:12.212) 0:03:22.193 ******
2026-02-03 02:53:16.896031 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-02-03 02:53:16.896068 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-02-03 02:53:16.896093 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-02-03 02:53:16.896105 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-02-03 02:53:16.896117 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-02-03 02:53:16.896128 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-02-03 02:53:16.896139 | orchestrator |
2026-02-03 02:53:16.896151 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-02-03 02:53:16.896162 | orchestrator | Tuesday 03 February 2026 02:53:16 +0000 (0:00:00.430) 0:03:22.624 ******
2026-02-03 02:53:16.896173 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-03 02:53:16.896184 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:53:16.896195 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-03 02:53:16.896206 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-03 02:53:16.896217 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:53:16.896233 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-03 02:53:16.896245 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:53:16.896256 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:53:16.896267 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-03 02:53:16.896278 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-03 02:53:16.896288 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-03 02:53:16.896299 | orchestrator |
2026-02-03 02:53:16.896310 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-02-03 02:53:16.896321 | orchestrator | Tuesday 03 February 2026 02:53:16 +0000 (0:00:00.718) 0:03:23.343 ******
2026-02-03 02:53:16.896332 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-03 02:53:16.896344 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-03 02:53:16.896356 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-03 02:53:16.896367 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-03 02:53:16.896378 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-03 02:53:16.896397 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-03 02:53:24.506813 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-03 02:53:24.506919 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-03 02:53:24.506955 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-03 02:53:24.506964 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-03 02:53:24.506972 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-03 02:53:24.506978 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-03 02:53:24.507027 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-03 02:53:24.507036 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-03 02:53:24.507042 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-03 02:53:24.507049 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-03 02:53:24.507057 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-03 02:53:24.507063 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-03 02:53:24.507070 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-03 02:53:24.507077 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:53:24.507085 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-03 02:53:24.507092 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-03 02:53:24.507099 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:53:24.507105 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-03 02:53:24.507112 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-03 02:53:24.507119 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-03 02:53:24.507125 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-03 02:53:24.507132 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-03 02:53:24.507138 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-03 02:53:24.507145 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-03 02:53:24.507151 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-03 02:53:24.507158 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-03 02:53:24.507164 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-03 02:53:24.507171 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:53:24.507178 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-03 02:53:24.507184 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-03 02:53:24.507191 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-03 02:53:24.507210 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-03 02:53:24.507217 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-03 02:53:24.507224 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-03 02:53:24.507230 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-03 02:53:24.507237 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-03 02:53:24.507250 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-03 02:53:24.507257 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:53:24.507264 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-03 02:53:24.507270 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-03 02:53:24.507277 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-03 02:53:24.507283 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-03 02:53:24.507290 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-03 02:53:24.507311 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-03 02:53:24.507318 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-03 02:53:24.507325 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-03 02:53:24.507331 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-03 02:53:24.507338 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-03 02:53:24.507345 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-03 02:53:24.507351 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-03 02:53:24.507360 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-03 02:53:24.507367 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-03 02:53:24.507375 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-03 02:53:24.507383 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-03 02:53:24.507391 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-03 02:53:24.507398 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-03 02:53:24.507406 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-03 02:53:24.507413 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-03 02:53:24.507421 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-03 02:53:24.507429 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-03 02:53:24.507436 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-03 02:53:24.507444 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-03 02:53:24.507451 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-03 02:53:24.507459 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-03 02:53:24.507467 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-03 02:53:24.507475 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-03 02:53:24.507483 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-03 02:53:24.507491 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-03 02:53:24.507504 | orchestrator |
2026-02-03 02:53:24.507513 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-02-03 02:53:24.507521 | orchestrator | Tuesday 03 February 2026 02:53:21 +0000 (0:00:04.835) 0:03:28.178 ******
2026-02-03 02:53:24.507529 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-03 02:53:24.507536 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-03 02:53:24.507544 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-03 02:53:24.507552 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-03 02:53:24.507563 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-03 02:53:24.507571 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-03 02:53:24.507579 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-03 02:53:24.507586 | orchestrator |
2026-02-03 02:53:24.507594 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-02-03 02:53:24.507602 | orchestrator | Tuesday 03 February 2026 02:53:23 +0000 (0:00:01.426) 0:03:29.605 ****** 2026-02-03 02:53:24.507609 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-03 02:53:24.507617 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:53:24.507625 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-03 02:53:24.507632 | orchestrator | skipping: [testbed-node-0] 2026-02-03 02:53:24.507644 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-03 02:53:24.507656 | orchestrator | skipping: [testbed-node-1] 2026-02-03 02:53:24.507668 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-03 02:53:24.507679 | orchestrator | skipping: [testbed-node-2] 2026-02-03 02:53:24.507691 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-03 02:53:24.507701 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-03 02:53:24.507720 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-03 02:53:38.092905 | orchestrator | 2026-02-03 02:53:38.093130 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2026-02-03 02:53:38.093154 | orchestrator | Tuesday 03 February 2026 02:53:24 +0000 (0:00:01.463) 0:03:31.068 ****** 2026-02-03 02:53:38.093167 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-03 02:53:38.093179 | orchestrator | 
skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-03 02:53:38.093191 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:53:38.093204 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-03 02:53:38.093215 | orchestrator | skipping: [testbed-node-3] 2026-02-03 02:53:38.093226 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-03 02:53:38.093237 | orchestrator | skipping: [testbed-node-4] 2026-02-03 02:53:38.093248 | orchestrator | skipping: [testbed-node-5] 2026-02-03 02:53:38.093259 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-03 02:53:38.093270 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-03 02:53:38.093281 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-03 02:53:38.093292 | orchestrator | 2026-02-03 02:53:38.093303 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-02-03 02:53:38.093339 | orchestrator | Tuesday 03 February 2026 02:53:25 +0000 (0:00:00.628) 0:03:31.697 ****** 2026-02-03 02:53:38.093351 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-03 02:53:38.093362 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:53:38.093373 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-03 02:53:38.093384 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-03 02:53:38.093395 | orchestrator | skipping: [testbed-node-0] 2026-02-03 02:53:38.093408 | orchestrator | skipping: 
[testbed-node-1] 2026-02-03 02:53:38.093421 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-03 02:53:38.093434 | orchestrator | skipping: [testbed-node-2] 2026-02-03 02:53:38.093447 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-03 02:53:38.093461 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-03 02:53:38.093475 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-03 02:53:38.093488 | orchestrator | 2026-02-03 02:53:38.093501 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-02-03 02:53:38.093514 | orchestrator | Tuesday 03 February 2026 02:53:25 +0000 (0:00:00.633) 0:03:32.330 ****** 2026-02-03 02:53:38.093527 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:53:38.093539 | orchestrator | skipping: [testbed-node-3] 2026-02-03 02:53:38.093552 | orchestrator | skipping: [testbed-node-4] 2026-02-03 02:53:38.093565 | orchestrator | skipping: [testbed-node-5] 2026-02-03 02:53:38.093578 | orchestrator | skipping: [testbed-node-0] 2026-02-03 02:53:38.093590 | orchestrator | skipping: [testbed-node-1] 2026-02-03 02:53:38.093604 | orchestrator | skipping: [testbed-node-2] 2026-02-03 02:53:38.093616 | orchestrator | 2026-02-03 02:53:38.093629 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-02-03 02:53:38.093642 | orchestrator | Tuesday 03 February 2026 02:53:26 +0000 (0:00:00.322) 0:03:32.652 ****** 2026-02-03 02:53:38.093655 | orchestrator | ok: [testbed-node-1] 2026-02-03 02:53:38.093669 | orchestrator | ok: [testbed-node-4] 2026-02-03 02:53:38.093682 | orchestrator | ok: [testbed-node-0] 2026-02-03 02:53:38.093695 | orchestrator | ok: [testbed-node-2] 2026-02-03 02:53:38.093707 | 
orchestrator | ok: [testbed-node-3] 2026-02-03 02:53:38.093720 | orchestrator | ok: [testbed-node-5] 2026-02-03 02:53:38.093732 | orchestrator | ok: [testbed-manager] 2026-02-03 02:53:38.093745 | orchestrator | 2026-02-03 02:53:38.093758 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-02-03 02:53:38.093771 | orchestrator | Tuesday 03 February 2026 02:53:31 +0000 (0:00:05.898) 0:03:38.550 ****** 2026-02-03 02:53:38.093782 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-02-03 02:53:38.093793 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-02-03 02:53:38.093804 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:53:38.093815 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-02-03 02:53:38.093826 | orchestrator | skipping: [testbed-node-3] 2026-02-03 02:53:38.093836 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-02-03 02:53:38.093847 | orchestrator | skipping: [testbed-node-4] 2026-02-03 02:53:38.093858 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-02-03 02:53:38.093870 | orchestrator | skipping: [testbed-node-5] 2026-02-03 02:53:38.093880 | orchestrator | skipping: [testbed-node-0] 2026-02-03 02:53:38.093909 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-02-03 02:53:38.093920 | orchestrator | skipping: [testbed-node-1] 2026-02-03 02:53:38.093931 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-02-03 02:53:38.093942 | orchestrator | skipping: [testbed-node-2] 2026-02-03 02:53:38.093961 | orchestrator | 2026-02-03 02:53:38.093972 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-02-03 02:53:38.093983 | orchestrator | Tuesday 03 February 2026 02:53:32 +0000 (0:00:00.294) 0:03:38.845 ****** 2026-02-03 02:53:38.093994 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-02-03 02:53:38.094005 | orchestrator | ok: [testbed-node-3] => 
(item=cron) 2026-02-03 02:53:38.094127 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-02-03 02:53:38.094174 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-02-03 02:53:38.094195 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-02-03 02:53:38.094213 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-02-03 02:53:38.094231 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-02-03 02:53:38.094243 | orchestrator | 2026-02-03 02:53:38.094254 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-02-03 02:53:38.094265 | orchestrator | Tuesday 03 February 2026 02:53:33 +0000 (0:00:01.057) 0:03:39.902 ****** 2026-02-03 02:53:38.094278 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 02:53:38.094291 | orchestrator | 2026-02-03 02:53:38.094302 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-02-03 02:53:38.094313 | orchestrator | Tuesday 03 February 2026 02:53:33 +0000 (0:00:00.533) 0:03:40.436 ****** 2026-02-03 02:53:38.094324 | orchestrator | ok: [testbed-node-3] 2026-02-03 02:53:38.094335 | orchestrator | ok: [testbed-node-4] 2026-02-03 02:53:38.094346 | orchestrator | ok: [testbed-manager] 2026-02-03 02:53:38.094357 | orchestrator | ok: [testbed-node-5] 2026-02-03 02:53:38.094368 | orchestrator | ok: [testbed-node-0] 2026-02-03 02:53:38.094379 | orchestrator | ok: [testbed-node-2] 2026-02-03 02:53:38.094390 | orchestrator | ok: [testbed-node-1] 2026-02-03 02:53:38.094401 | orchestrator | 2026-02-03 02:53:38.094412 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-02-03 02:53:38.094423 | orchestrator | Tuesday 03 February 2026 02:53:35 +0000 (0:00:01.210) 0:03:41.647 
****** 2026-02-03 02:53:38.094434 | orchestrator | ok: [testbed-manager] 2026-02-03 02:53:38.094445 | orchestrator | ok: [testbed-node-3] 2026-02-03 02:53:38.094456 | orchestrator | ok: [testbed-node-4] 2026-02-03 02:53:38.094467 | orchestrator | ok: [testbed-node-5] 2026-02-03 02:53:38.094478 | orchestrator | ok: [testbed-node-1] 2026-02-03 02:53:38.094488 | orchestrator | ok: [testbed-node-0] 2026-02-03 02:53:38.094499 | orchestrator | ok: [testbed-node-2] 2026-02-03 02:53:38.094510 | orchestrator | 2026-02-03 02:53:38.094521 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-02-03 02:53:38.094532 | orchestrator | Tuesday 03 February 2026 02:53:35 +0000 (0:00:00.627) 0:03:42.275 ****** 2026-02-03 02:53:38.094543 | orchestrator | changed: [testbed-manager] 2026-02-03 02:53:38.094555 | orchestrator | changed: [testbed-node-3] 2026-02-03 02:53:38.094566 | orchestrator | changed: [testbed-node-4] 2026-02-03 02:53:38.094577 | orchestrator | changed: [testbed-node-5] 2026-02-03 02:53:38.094588 | orchestrator | changed: [testbed-node-0] 2026-02-03 02:53:38.094599 | orchestrator | changed: [testbed-node-1] 2026-02-03 02:53:38.094610 | orchestrator | changed: [testbed-node-2] 2026-02-03 02:53:38.094621 | orchestrator | 2026-02-03 02:53:38.094632 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-02-03 02:53:38.094643 | orchestrator | Tuesday 03 February 2026 02:53:36 +0000 (0:00:00.653) 0:03:42.928 ****** 2026-02-03 02:53:38.094654 | orchestrator | ok: [testbed-manager] 2026-02-03 02:53:38.094665 | orchestrator | ok: [testbed-node-0] 2026-02-03 02:53:38.094676 | orchestrator | ok: [testbed-node-3] 2026-02-03 02:53:38.094687 | orchestrator | ok: [testbed-node-4] 2026-02-03 02:53:38.094698 | orchestrator | ok: [testbed-node-1] 2026-02-03 02:53:38.094709 | orchestrator | ok: [testbed-node-5] 2026-02-03 02:53:38.094719 | orchestrator | ok: [testbed-node-2] 2026-02-03 
02:53:38.094730 | orchestrator | 2026-02-03 02:53:38.094741 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-02-03 02:53:38.094764 | orchestrator | Tuesday 03 February 2026 02:53:36 +0000 (0:00:00.643) 0:03:43.572 ****** 2026-02-03 02:53:38.094801 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770085594.6544, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-03 02:53:38.094831 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770085620.822286, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-03 02:53:38.094849 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770085605.5413203, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-03 02:53:38.094898 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770085626.8691132, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-03 02:53:42.873471 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770085617.8022087, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-03 02:53:42.873544 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770085613.5304956, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-03 02:53:42.873553 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770085626.8611352, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-03 02:53:42.873579 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-03 02:53:42.873596 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-03 02:53:42.873602 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-03 02:53:42.873608 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-03 02:53:42.873630 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-03 02:53:42.873636 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-03 
02:53:42.873641 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-03 02:53:42.873652 | orchestrator | 2026-02-03 02:53:42.873658 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-02-03 02:53:42.873665 | orchestrator | Tuesday 03 February 2026 02:53:38 +0000 (0:00:01.077) 0:03:44.650 ****** 2026-02-03 02:53:42.873671 | orchestrator | changed: [testbed-manager] 2026-02-03 02:53:42.873677 | orchestrator | changed: [testbed-node-4] 2026-02-03 02:53:42.873683 | orchestrator | changed: [testbed-node-3] 2026-02-03 02:53:42.873688 | orchestrator | changed: [testbed-node-5] 2026-02-03 02:53:42.873693 | orchestrator | changed: [testbed-node-0] 2026-02-03 02:53:42.873698 | orchestrator | changed: [testbed-node-1] 2026-02-03 02:53:42.873703 | orchestrator | changed: [testbed-node-2] 2026-02-03 02:53:42.873708 | orchestrator | 2026-02-03 02:53:42.873714 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-02-03 02:53:42.873719 | orchestrator | Tuesday 03 February 2026 02:53:39 +0000 (0:00:01.128) 0:03:45.778 ****** 2026-02-03 02:53:42.873724 | orchestrator | changed: [testbed-manager] 2026-02-03 02:53:42.873729 | orchestrator | changed: [testbed-node-4] 2026-02-03 02:53:42.873734 | orchestrator | changed: [testbed-node-5] 2026-02-03 02:53:42.873740 | orchestrator | changed: [testbed-node-3] 2026-02-03 02:53:42.873745 | orchestrator | changed: [testbed-node-0] 
2026-02-03 02:53:42.873750 | orchestrator | changed: [testbed-node-1] 2026-02-03 02:53:42.873755 | orchestrator | changed: [testbed-node-2] 2026-02-03 02:53:42.873760 | orchestrator | 2026-02-03 02:53:42.873768 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-02-03 02:53:42.873773 | orchestrator | Tuesday 03 February 2026 02:53:40 +0000 (0:00:01.130) 0:03:46.909 ****** 2026-02-03 02:53:42.873778 | orchestrator | changed: [testbed-manager] 2026-02-03 02:53:42.873783 | orchestrator | changed: [testbed-node-4] 2026-02-03 02:53:42.873788 | orchestrator | changed: [testbed-node-3] 2026-02-03 02:53:42.873793 | orchestrator | changed: [testbed-node-5] 2026-02-03 02:53:42.873798 | orchestrator | changed: [testbed-node-0] 2026-02-03 02:53:42.873803 | orchestrator | changed: [testbed-node-2] 2026-02-03 02:53:42.873808 | orchestrator | changed: [testbed-node-1] 2026-02-03 02:53:42.873813 | orchestrator | 2026-02-03 02:53:42.873818 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-02-03 02:53:42.873824 | orchestrator | Tuesday 03 February 2026 02:53:41 +0000 (0:00:01.114) 0:03:48.024 ****** 2026-02-03 02:53:42.873829 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:53:42.873834 | orchestrator | skipping: [testbed-node-3] 2026-02-03 02:53:42.873839 | orchestrator | skipping: [testbed-node-4] 2026-02-03 02:53:42.873844 | orchestrator | skipping: [testbed-node-5] 2026-02-03 02:53:42.873849 | orchestrator | skipping: [testbed-node-0] 2026-02-03 02:53:42.873853 | orchestrator | skipping: [testbed-node-1] 2026-02-03 02:53:42.873859 | orchestrator | skipping: [testbed-node-2] 2026-02-03 02:53:42.873864 | orchestrator | 2026-02-03 02:53:42.873869 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-02-03 02:53:42.873874 | orchestrator | Tuesday 03 February 2026 02:53:41 +0000 (0:00:00.285) 0:03:48.309 ****** 2026-02-03 
02:53:42.873879 | orchestrator | ok: [testbed-manager] 2026-02-03 02:53:42.873885 | orchestrator | ok: [testbed-node-3] 2026-02-03 02:53:42.873890 | orchestrator | ok: [testbed-node-4] 2026-02-03 02:53:42.873895 | orchestrator | ok: [testbed-node-1] 2026-02-03 02:53:42.873900 | orchestrator | ok: [testbed-node-0] 2026-02-03 02:53:42.873905 | orchestrator | ok: [testbed-node-5] 2026-02-03 02:53:42.873910 | orchestrator | ok: [testbed-node-2] 2026-02-03 02:53:42.873915 | orchestrator | 2026-02-03 02:53:42.873920 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-02-03 02:53:42.873925 | orchestrator | Tuesday 03 February 2026 02:53:42 +0000 (0:00:00.715) 0:03:49.025 ****** 2026-02-03 02:53:42.873931 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 02:53:42.873941 | orchestrator | 2026-02-03 02:53:42.873946 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-02-03 02:53:42.873955 | orchestrator | Tuesday 03 February 2026 02:53:42 +0000 (0:00:00.409) 0:03:49.434 ****** 2026-02-03 02:55:02.779850 | orchestrator | ok: [testbed-manager] 2026-02-03 02:55:02.779982 | orchestrator | changed: [testbed-node-1] 2026-02-03 02:55:02.780006 | orchestrator | changed: [testbed-node-4] 2026-02-03 02:55:02.780020 | orchestrator | changed: [testbed-node-2] 2026-02-03 02:55:02.780054 | orchestrator | changed: [testbed-node-5] 2026-02-03 02:55:02.780082 | orchestrator | changed: [testbed-node-0] 2026-02-03 02:55:02.780096 | orchestrator | changed: [testbed-node-3] 2026-02-03 02:55:02.780111 | orchestrator | 2026-02-03 02:55:02.780127 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-02-03 02:55:02.780236 | orchestrator | 
Tuesday 03 February 2026 02:53:51 +0000 (0:00:08.254) 0:03:57.688 ****** 2026-02-03 02:55:02.780254 | orchestrator | ok: [testbed-manager] 2026-02-03 02:55:02.780269 | orchestrator | ok: [testbed-node-4] 2026-02-03 02:55:02.780282 | orchestrator | ok: [testbed-node-5] 2026-02-03 02:55:02.780296 | orchestrator | ok: [testbed-node-1] 2026-02-03 02:55:02.780311 | orchestrator | ok: [testbed-node-0] 2026-02-03 02:55:02.780325 | orchestrator | ok: [testbed-node-2] 2026-02-03 02:55:02.780339 | orchestrator | ok: [testbed-node-3] 2026-02-03 02:55:02.780353 | orchestrator | 2026-02-03 02:55:02.780369 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-02-03 02:55:02.780385 | orchestrator | Tuesday 03 February 2026 02:53:52 +0000 (0:00:01.310) 0:03:58.999 ****** 2026-02-03 02:55:02.780401 | orchestrator | ok: [testbed-manager] 2026-02-03 02:55:02.780417 | orchestrator | ok: [testbed-node-3] 2026-02-03 02:55:02.780432 | orchestrator | ok: [testbed-node-4] 2026-02-03 02:55:02.780446 | orchestrator | ok: [testbed-node-5] 2026-02-03 02:55:02.780457 | orchestrator | ok: [testbed-node-0] 2026-02-03 02:55:02.780467 | orchestrator | ok: [testbed-node-1] 2026-02-03 02:55:02.780478 | orchestrator | ok: [testbed-node-2] 2026-02-03 02:55:02.780494 | orchestrator | 2026-02-03 02:55:02.780505 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-02-03 02:55:02.780516 | orchestrator | Tuesday 03 February 2026 02:53:53 +0000 (0:00:01.167) 0:04:00.167 ****** 2026-02-03 02:55:02.780526 | orchestrator | ok: [testbed-manager] 2026-02-03 02:55:02.780538 | orchestrator | ok: [testbed-node-3] 2026-02-03 02:55:02.780552 | orchestrator | ok: [testbed-node-4] 2026-02-03 02:55:02.780567 | orchestrator | ok: [testbed-node-5] 2026-02-03 02:55:02.780577 | orchestrator | ok: [testbed-node-0] 2026-02-03 02:55:02.780587 | orchestrator | ok: [testbed-node-1] 2026-02-03 02:55:02.780597 | orchestrator | ok: 
[testbed-node-2]
2026-02-03 02:55:02.780607 | orchestrator |
2026-02-03 02:55:02.780618 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-02-03 02:55:02.780629 | orchestrator | Tuesday 03 February 2026 02:53:53 +0000 (0:00:00.302) 0:04:00.469 ******
2026-02-03 02:55:02.780640 | orchestrator | ok: [testbed-manager]
2026-02-03 02:55:02.780650 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:55:02.780660 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:55:02.780670 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:55:02.780681 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:55:02.780691 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:55:02.780700 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:55:02.780709 | orchestrator |
2026-02-03 02:55:02.780718 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-02-03 02:55:02.780727 | orchestrator | Tuesday 03 February 2026 02:53:54 +0000 (0:00:00.328) 0:04:00.797 ******
2026-02-03 02:55:02.780735 | orchestrator | ok: [testbed-manager]
2026-02-03 02:55:02.780744 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:55:02.780753 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:55:02.780786 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:55:02.780796 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:55:02.780805 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:55:02.780813 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:55:02.780822 | orchestrator |
2026-02-03 02:55:02.780831 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-02-03 02:55:02.780840 | orchestrator | Tuesday 03 February 2026 02:53:54 +0000 (0:00:00.328) 0:04:01.126 ******
2026-02-03 02:55:02.780849 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:55:02.780858 | orchestrator | ok: [testbed-manager]
2026-02-03 02:55:02.780867 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:55:02.780875 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:55:02.780884 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:55:02.780893 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:55:02.780901 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:55:02.780910 | orchestrator |
2026-02-03 02:55:02.780918 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-02-03 02:55:02.780927 | orchestrator | Tuesday 03 February 2026 02:54:00 +0000 (0:00:05.644) 0:04:06.771 ******
2026-02-03 02:55:02.780938 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 02:55:02.780950 | orchestrator |
2026-02-03 02:55:02.780959 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-02-03 02:55:02.780968 | orchestrator | Tuesday 03 February 2026 02:54:00 +0000 (0:00:00.397) 0:04:07.169 ******
2026-02-03 02:55:02.780976 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-02-03 02:55:02.780985 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-02-03 02:55:02.780995 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-02-03 02:55:02.781003 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-02-03 02:55:02.781012 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:55:02.781021 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:55:02.781048 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-02-03 02:55:02.781057 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-02-03 02:55:02.781071 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-02-03 02:55:02.781083 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-02-03 02:55:02.781092 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:55:02.781100 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-02-03 02:55:02.781109 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:55:02.781118 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-02-03 02:55:02.781127 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-02-03 02:55:02.781136 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-02-03 02:55:02.781200 | orchestrator | skipping: [testbed-node-0]
2026-02-03 02:55:02.781216 | orchestrator | skipping: [testbed-node-1]
2026-02-03 02:55:02.781231 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-02-03 02:55:02.781241 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-02-03 02:55:02.781249 | orchestrator | skipping: [testbed-node-2]
2026-02-03 02:55:02.781262 | orchestrator |
2026-02-03 02:55:02.781277 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-02-03 02:55:02.781286 | orchestrator | Tuesday 03 February 2026 02:54:00 +0000 (0:00:00.365) 0:04:07.534 ******
2026-02-03 02:55:02.781295 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 02:55:02.781304 | orchestrator |
2026-02-03 02:55:02.781318 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-02-03 02:55:02.781345 | orchestrator | Tuesday 03 February 2026 02:54:01 +0000 (0:00:00.307) 0:04:07.943 ******
2026-02-03 02:55:02.781359 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-02-03 02:55:02.781374 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-02-03 02:55:02.781390 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:55:02.781404 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-02-03 02:55:02.781418 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:55:02.781427 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-02-03 02:55:02.781435 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:55:02.781444 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:55:02.781453 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-02-03 02:55:02.781461 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-02-03 02:55:02.781470 | orchestrator | skipping: [testbed-node-0]
2026-02-03 02:55:02.781478 | orchestrator | skipping: [testbed-node-1]
2026-02-03 02:55:02.781487 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-02-03 02:55:02.781496 | orchestrator | skipping: [testbed-node-2]
2026-02-03 02:55:02.781504 | orchestrator |
2026-02-03 02:55:02.781513 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-02-03 02:55:02.781521 | orchestrator | Tuesday 03 February 2026 02:54:01 +0000 (0:00:00.307) 0:04:08.250 ******
2026-02-03 02:55:02.781531 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 02:55:02.781539 | orchestrator |
2026-02-03 02:55:02.781548 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-02-03 02:55:02.781557 | orchestrator | Tuesday 03 February 2026 02:54:02 +0000 (0:00:00.402) 0:04:08.653 ******
2026-02-03 02:55:02.781565 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:55:02.781574 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:55:02.781583 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:55:02.781591 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:55:02.781606 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:55:02.781615 | orchestrator | changed: [testbed-manager]
2026-02-03 02:55:02.781623 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:55:02.781632 | orchestrator |
2026-02-03 02:55:02.781641 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-02-03 02:55:02.781650 | orchestrator | Tuesday 03 February 2026 02:54:37 +0000 (0:00:35.906) 0:04:44.560 ******
2026-02-03 02:55:02.781658 | orchestrator | changed: [testbed-manager]
2026-02-03 02:55:02.781667 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:55:02.781675 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:55:02.781684 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:55:02.781692 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:55:02.781701 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:55:02.781709 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:55:02.781718 | orchestrator |
2026-02-03 02:55:02.781727 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-02-03 02:55:02.781735 | orchestrator | Tuesday 03 February 2026 02:54:46 +0000 (0:00:08.125) 0:04:52.685 ******
2026-02-03 02:55:02.781744 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:55:02.781753 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:55:02.781761 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:55:02.781770 | orchestrator | changed: [testbed-manager]
2026-02-03 02:55:02.781778 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:55:02.781787 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:55:02.781795 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:55:02.781804 | orchestrator |
2026-02-03 02:55:02.781813 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-02-03 02:55:02.781828 | orchestrator | Tuesday 03 February 2026 02:54:54 +0000 (0:00:08.395) 0:05:01.081 ******
2026-02-03 02:55:02.781837 | orchestrator | ok: [testbed-manager]
2026-02-03 02:55:02.781845 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:55:02.781854 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:55:02.781863 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:55:02.781871 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:55:02.781880 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:55:02.781889 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:55:02.781897 | orchestrator |
2026-02-03 02:55:02.781906 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-02-03 02:55:02.781915 | orchestrator | Tuesday 03 February 2026 02:54:56 +0000 (0:00:02.086) 0:05:03.168 ******
2026-02-03 02:55:02.781924 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:55:02.781933 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:55:02.781941 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:55:02.781950 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:55:02.781959 | orchestrator | changed: [testbed-manager]
2026-02-03 02:55:02.781967 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:55:02.781976 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:55:02.781985 | orchestrator |
2026-02-03 02:55:02.782001 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-02-03 02:55:14.317873 | orchestrator | Tuesday 03 February 2026 02:55:02 +0000 (0:00:06.169) 0:05:09.338 ******
2026-02-03 02:55:14.317974 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 02:55:14.317985 | orchestrator |
2026-02-03 02:55:14.317993 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-02-03 02:55:14.318000 | orchestrator | Tuesday 03 February 2026 02:55:03 +0000 (0:00:00.564) 0:05:09.902 ******
2026-02-03 02:55:14.318006 | orchestrator | changed: [testbed-manager]
2026-02-03 02:55:14.318060 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:55:14.318068 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:55:14.318074 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:55:14.318080 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:55:14.318085 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:55:14.318092 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:55:14.318097 | orchestrator |
2026-02-03 02:55:14.318104 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-02-03 02:55:14.318110 | orchestrator | Tuesday 03 February 2026 02:55:04 +0000 (0:00:00.732) 0:05:10.635 ******
2026-02-03 02:55:14.318116 | orchestrator | ok: [testbed-manager]
2026-02-03 02:55:14.318123 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:55:14.318129 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:55:14.318135 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:55:14.318140 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:55:14.318146 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:55:14.318152 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:55:14.318177 | orchestrator |
2026-02-03 02:55:14.318183 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-02-03 02:55:14.318190 | orchestrator | Tuesday 03 February 2026 02:55:05 +0000 (0:00:01.689) 0:05:12.324 ******
2026-02-03 02:55:14.318196 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:55:14.318202 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:55:14.318207 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:55:14.318213 | orchestrator | changed: [testbed-manager]
2026-02-03 02:55:14.318219 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:55:14.318226 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:55:14.318231 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:55:14.318237 | orchestrator |
2026-02-03 02:55:14.318243 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-02-03 02:55:14.318249 | orchestrator | Tuesday 03 February 2026 02:55:06 +0000 (0:00:00.921) 0:05:13.246 ******
2026-02-03 02:55:14.318276 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:55:14.318282 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:55:14.318288 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:55:14.318294 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:55:14.318300 | orchestrator | skipping: [testbed-node-0]
2026-02-03 02:55:14.318305 | orchestrator | skipping: [testbed-node-1]
2026-02-03 02:55:14.318311 | orchestrator | skipping: [testbed-node-2]
2026-02-03 02:55:14.318317 | orchestrator |
2026-02-03 02:55:14.318323 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-02-03 02:55:14.318328 | orchestrator | Tuesday 03 February 2026 02:55:06 +0000 (0:00:00.317) 0:05:13.563 ******
2026-02-03 02:55:14.318334 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:55:14.318339 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:55:14.318345 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:55:14.318364 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:55:14.318369 | orchestrator | skipping: [testbed-node-0]
2026-02-03 02:55:14.318375 | orchestrator | skipping: [testbed-node-1]
2026-02-03 02:55:14.318380 | orchestrator | skipping: [testbed-node-2]
2026-02-03 02:55:14.318386 | orchestrator |
2026-02-03 02:55:14.318392 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-02-03 02:55:14.318397 | orchestrator | Tuesday 03 February 2026 02:55:07 +0000 (0:00:00.310) 0:05:13.954 ******
2026-02-03 02:55:14.318421 | orchestrator | ok: [testbed-manager]
2026-02-03 02:55:14.318428 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:55:14.318441 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:55:14.318447 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:55:14.318453 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:55:14.318459 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:55:14.318465 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:55:14.318471 | orchestrator |
2026-02-03 02:55:14.318477 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-02-03 02:55:14.318483 | orchestrator | Tuesday 03 February 2026 02:55:07 +0000 (0:00:00.310) 0:05:14.264 ******
2026-02-03 02:55:14.318489 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:55:14.318495 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:55:14.318501 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:55:14.318507 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:55:14.318513 | orchestrator | skipping: [testbed-node-0]
2026-02-03 02:55:14.318518 | orchestrator | skipping: [testbed-node-1]
2026-02-03 02:55:14.318524 | orchestrator | skipping: [testbed-node-2]
2026-02-03 02:55:14.318530 | orchestrator |
2026-02-03 02:55:14.318537 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-02-03 02:55:14.318544 | orchestrator | Tuesday 03 February 2026 02:55:07 +0000 (0:00:00.274) 0:05:14.539 ******
2026-02-03 02:55:14.318550 | orchestrator | ok: [testbed-manager]
2026-02-03 02:55:14.318556 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:55:14.318561 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:55:14.318567 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:55:14.318572 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:55:14.318578 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:55:14.318583 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:55:14.318589 | orchestrator |
2026-02-03 02:55:14.318594 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-02-03 02:55:14.318600 | orchestrator | Tuesday 03 February 2026 02:55:08 +0000 (0:00:00.321) 0:05:14.860 ******
2026-02-03 02:55:14.318606 | orchestrator | ok: [testbed-manager] =>
2026-02-03 02:55:14.318611 | orchestrator |   docker_version: 5:27.5.1
2026-02-03 02:55:14.318617 | orchestrator | ok: [testbed-node-3] =>
2026-02-03 02:55:14.318622 | orchestrator |   docker_version: 5:27.5.1
2026-02-03 02:55:14.318627 | orchestrator | ok: [testbed-node-4] =>
2026-02-03 02:55:14.318633 | orchestrator |   docker_version: 5:27.5.1
2026-02-03 02:55:14.318639 | orchestrator | ok: [testbed-node-5] =>
2026-02-03 02:55:14.318644 | orchestrator |   docker_version: 5:27.5.1
2026-02-03 02:55:14.318667 | orchestrator | ok: [testbed-node-0] =>
2026-02-03 02:55:14.318678 | orchestrator |   docker_version: 5:27.5.1
2026-02-03 02:55:14.318683 | orchestrator | ok: [testbed-node-1] =>
2026-02-03 02:55:14.318689 | orchestrator |   docker_version: 5:27.5.1
2026-02-03 02:55:14.318695 | orchestrator | ok: [testbed-node-2] =>
2026-02-03 02:55:14.318701 | orchestrator |   docker_version: 5:27.5.1
2026-02-03 02:55:14.318707 | orchestrator |
2026-02-03 02:55:14.318713 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-02-03 02:55:14.318718 | orchestrator | Tuesday 03 February 2026 02:55:08 +0000 (0:00:00.280) 0:05:15.141 ******
2026-02-03 02:55:14.318724 | orchestrator | ok: [testbed-manager] =>
2026-02-03 02:55:14.318729 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-03 02:55:14.318735 | orchestrator | ok: [testbed-node-3] =>
2026-02-03 02:55:14.318740 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-03 02:55:14.318746 | orchestrator | ok: [testbed-node-4] =>
2026-02-03 02:55:14.318751 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-03 02:55:14.318757 | orchestrator | ok: [testbed-node-5] =>
2026-02-03 02:55:14.318763 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-03 02:55:14.318768 | orchestrator | ok: [testbed-node-0] =>
2026-02-03 02:55:14.318774 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-03 02:55:14.318779 | orchestrator | ok: [testbed-node-1] =>
2026-02-03 02:55:14.318784 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-03 02:55:14.318790 | orchestrator | ok: [testbed-node-2] =>
2026-02-03 02:55:14.318796 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-03 02:55:14.318802 | orchestrator |
2026-02-03 02:55:14.318808 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-02-03 02:55:14.318813 | orchestrator | Tuesday 03 February 2026 02:55:08 +0000 (0:00:00.294) 0:05:15.435 ******
2026-02-03 02:55:14.318819 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:55:14.318824 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:55:14.318830 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:55:14.318835 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:55:14.318841 | orchestrator | skipping: [testbed-node-0]
2026-02-03 02:55:14.318847 | orchestrator | skipping: [testbed-node-1]
2026-02-03 02:55:14.318852 | orchestrator | skipping: [testbed-node-2]
2026-02-03 02:55:14.318858 | orchestrator |
2026-02-03 02:55:14.318863 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-02-03 02:55:14.318869 | orchestrator | Tuesday 03 February 2026 02:55:09 +0000 (0:00:00.269) 0:05:15.705 ******
2026-02-03 02:55:14.318874 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:55:14.318880 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:55:14.318885 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:55:14.318891 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:55:14.318896 | orchestrator | skipping: [testbed-node-0]
2026-02-03 02:55:14.318902 | orchestrator | skipping: [testbed-node-1]
2026-02-03 02:55:14.318907 | orchestrator | skipping: [testbed-node-2]
2026-02-03 02:55:14.318913 | orchestrator |
2026-02-03 02:55:14.318918 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-02-03 02:55:14.318924 | orchestrator | Tuesday 03 February 2026 02:55:09 +0000 (0:00:00.274) 0:05:15.979 ******
2026-02-03 02:55:14.318932 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 02:55:14.318940 | orchestrator |
2026-02-03 02:55:14.318951 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-02-03 02:55:14.318957 | orchestrator | Tuesday 03 February 2026 02:55:09 +0000 (0:00:00.406) 0:05:16.385 ******
2026-02-03 02:55:14.318963 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:55:14.318969 | orchestrator | ok: [testbed-manager]
2026-02-03 02:55:14.318974 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:55:14.318980 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:55:14.318985 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:55:14.318999 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:55:14.319005 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:55:14.319010 | orchestrator |
2026-02-03 02:55:14.319016 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-02-03 02:55:14.319021 | orchestrator | Tuesday 03 February 2026 02:55:10 +0000 (0:00:01.040) 0:05:17.426 ******
2026-02-03 02:55:14.319027 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:55:14.319033 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:55:14.319038 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:55:14.319043 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:55:14.319049 | orchestrator | ok: [testbed-manager]
2026-02-03 02:55:14.319054 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:55:14.319060 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:55:14.319065 | orchestrator |
2026-02-03 02:55:14.319071 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-02-03 02:55:14.319078 | orchestrator | Tuesday 03 February 2026 02:55:13 +0000 (0:00:03.040) 0:05:20.467 ******
2026-02-03 02:55:14.319084 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-02-03 02:55:14.319090 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-02-03 02:55:14.319096 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-02-03 02:55:14.319102 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:55:14.319108 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-02-03 02:55:14.319114 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-02-03 02:55:14.319120 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-02-03 02:55:14.319127 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:55:14.319133 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-02-03 02:55:14.319139 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-02-03 02:55:14.319144 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-02-03 02:55:14.319150 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:55:14.319230 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-02-03 02:55:14.319240 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-02-03 02:55:14.319246 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-02-03 02:55:14.319251 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-02-03 02:55:14.319265 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-02-03 02:56:17.460460 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-02-03 02:56:17.460553 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:56:17.460564 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-02-03 02:56:17.460571 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-02-03 02:56:17.460577 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-02-03 02:56:17.460583 | orchestrator | skipping: [testbed-node-0]
2026-02-03 02:56:17.460589 | orchestrator | skipping: [testbed-node-1]
2026-02-03 02:56:17.460595 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-02-03 02:56:17.460601 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-02-03 02:56:17.460607 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-02-03 02:56:17.460613 | orchestrator | skipping: [testbed-node-2]
2026-02-03 02:56:17.460619 | orchestrator |
2026-02-03 02:56:17.460626 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-02-03 02:56:17.460633 | orchestrator | Tuesday 03 February 2026 02:55:14 +0000 (0:00:00.676) 0:05:21.143 ******
2026-02-03 02:56:17.460639 | orchestrator | ok: [testbed-manager]
2026-02-03 02:56:17.460645 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:56:17.460651 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:56:17.460656 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:56:17.460663 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:56:17.460669 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:56:17.460691 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:56:17.460697 | orchestrator |
2026-02-03 02:56:17.460703 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-02-03 02:56:17.460709 | orchestrator | Tuesday 03 February 2026 02:55:21 +0000 (0:00:07.403) 0:05:28.547 ******
2026-02-03 02:56:17.460715 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:56:17.460721 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:56:17.460726 | orchestrator | ok: [testbed-manager]
2026-02-03 02:56:17.460732 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:56:17.460738 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:56:17.460743 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:56:17.460749 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:56:17.460755 | orchestrator |
2026-02-03 02:56:17.460760 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-02-03 02:56:17.460766 | orchestrator | Tuesday 03 February 2026 02:55:23 +0000 (0:00:01.096) 0:05:29.644 ******
2026-02-03 02:56:17.460772 | orchestrator | ok: [testbed-manager]
2026-02-03 02:56:17.460778 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:56:17.460783 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:56:17.460789 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:56:17.460794 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:56:17.460800 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:56:17.460806 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:56:17.460811 | orchestrator |
2026-02-03 02:56:17.460817 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-02-03 02:56:17.460823 | orchestrator | Tuesday 03 February 2026 02:55:30 +0000 (0:00:07.181) 0:05:36.825 ******
2026-02-03 02:56:17.460829 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:56:17.460835 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:56:17.460840 | orchestrator | changed: [testbed-manager]
2026-02-03 02:56:17.460846 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:56:17.460851 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:56:17.460857 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:56:17.460863 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:56:17.460868 | orchestrator |
2026-02-03 02:56:17.460874 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-02-03 02:56:17.460880 | orchestrator | Tuesday 03 February 2026 02:55:33 +0000 (0:00:03.007) 0:05:39.832 ******
2026-02-03 02:56:17.460886 | orchestrator | ok: [testbed-manager]
2026-02-03 02:56:17.460892 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:56:17.460897 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:56:17.460903 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:56:17.460909 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:56:17.460914 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:56:17.460920 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:56:17.460926 | orchestrator |
2026-02-03 02:56:17.460931 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-02-03 02:56:17.460937 | orchestrator | Tuesday 03 February 2026 02:55:34 +0000 (0:00:01.186) 0:05:41.018 ******
2026-02-03 02:56:17.460943 | orchestrator | ok: [testbed-manager]
2026-02-03 02:56:17.460949 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:56:17.460954 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:56:17.460960 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:56:17.460965 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:56:17.460971 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:56:17.460977 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:56:17.460983 | orchestrator |
2026-02-03 02:56:17.460988 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-02-03 02:56:17.460995 | orchestrator | Tuesday 03 February 2026 02:55:35 +0000 (0:00:01.361) 0:05:42.380 ******
2026-02-03 02:56:17.461002 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:56:17.461009 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:56:17.461016 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:56:17.461022 | orchestrator | skipping: [testbed-node-0]
2026-02-03 02:56:17.461034 | orchestrator | skipping: [testbed-node-1]
2026-02-03 02:56:17.461041 | orchestrator | skipping: [testbed-node-2]
2026-02-03 02:56:17.461048 | orchestrator | changed: [testbed-manager]
2026-02-03 02:56:17.461054 | orchestrator |
2026-02-03 02:56:17.461061 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-02-03 02:56:17.461068 | orchestrator | Tuesday 03 February 2026 02:55:36 +0000 (0:00:00.640) 0:05:43.021 ******
2026-02-03 02:56:17.461074 | orchestrator | ok: [testbed-manager]
2026-02-03 02:56:17.461081 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:56:17.461087 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:56:17.461094 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:56:17.461101 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:56:17.461107 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:56:17.461114 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:56:17.461120 | orchestrator |
2026-02-03 02:56:17.461127 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-02-03 02:56:17.461146 | orchestrator | Tuesday 03 February 2026 02:55:46 +0000 (0:00:09.850) 0:05:52.871 ******
2026-02-03 02:56:17.461154 | orchestrator | changed: [testbed-manager]
2026-02-03 02:56:17.461160 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:56:17.461167 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:56:17.461174 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:56:17.461180 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:56:17.461187 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:56:17.461193 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:56:17.461200 | orchestrator |
2026-02-03 02:56:17.461207 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-02-03 02:56:17.461213 | orchestrator | Tuesday 03 February 2026 02:55:47 +0000 (0:00:00.962) 0:05:53.833 ******
2026-02-03 02:56:17.461220 | orchestrator | ok: [testbed-manager]
2026-02-03 02:56:17.461226 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:56:17.461233 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:56:17.461240 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:56:17.461269 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:56:17.461276 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:56:17.461282 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:56:17.461289 | orchestrator |
2026-02-03 02:56:17.461295 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-02-03 02:56:17.461302 | orchestrator | Tuesday 03 February 2026 02:55:57 +0000 (0:00:10.394) 0:06:04.228 ******
2026-02-03 02:56:17.461309 | orchestrator | ok: [testbed-manager]
2026-02-03 02:56:17.461316 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:56:17.461322 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:56:17.461329 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:56:17.461336 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:56:17.461343 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:56:17.461350 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:56:17.461356 | orchestrator |
2026-02-03 02:56:17.461363 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-02-03 02:56:17.461370 | orchestrator | Tuesday 03 February 2026 02:56:09 +0000 (0:00:12.268) 0:06:16.496 ******
2026-02-03 02:56:17.461377 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-02-03 02:56:17.461384 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-02-03 02:56:17.461391 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-02-03 02:56:17.461397 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-02-03 02:56:17.461402 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-02-03 02:56:17.461408 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-02-03 02:56:17.461414 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-02-03 02:56:17.461419 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-02-03 02:56:17.461425 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-02-03 02:56:17.461435 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-02-03 02:56:17.461441 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-02-03 02:56:17.461483 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-02-03 02:56:17.461489 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-02-03 02:56:17.461495 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-02-03 02:56:17.461501 | orchestrator |
2026-02-03 02:56:17.461507 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-02-03 02:56:17.461513 | orchestrator | Tuesday 03 February 2026 02:56:11 +0000 (0:00:01.330) 0:06:17.827 ******
2026-02-03 02:56:17.461521 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:56:17.461527 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:56:17.461533 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:56:17.461539 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:56:17.461544 | orchestrator | skipping: [testbed-node-0]
2026-02-03 02:56:17.461550 | orchestrator | skipping: [testbed-node-1]
2026-02-03 02:56:17.461556 | orchestrator | skipping: [testbed-node-2]
2026-02-03 02:56:17.461561 | orchestrator |
2026-02-03 02:56:17.461567 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-02-03 02:56:17.461573 | orchestrator | Tuesday 03 February 2026 02:56:11 +0000 (0:00:00.547) 0:06:18.375 ******
2026-02-03 02:56:17.461579 | orchestrator | ok: [testbed-manager]
2026-02-03 02:56:17.461585 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:56:17.461590 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:56:17.461596 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:56:17.461602 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:56:17.461607 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:56:17.461613 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:56:17.461619 | orchestrator |
2026-02-03 02:56:17.461625 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-02-03 02:56:17.461631 | orchestrator | Tuesday 03 February 2026 02:56:16 +0000 (0:00:04.599) 0:06:22.975 ******
2026-02-03 02:56:17.461636 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:56:17.461642 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:56:17.461648 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:56:17.461653 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:56:17.461659 | orchestrator | skipping: [testbed-node-0]
2026-02-03 02:56:17.461665 | orchestrator | skipping: [testbed-node-1]
2026-02-03 02:56:17.461670 | orchestrator | skipping: [testbed-node-2]
2026-02-03 02:56:17.461676 | orchestrator |
2026-02-03 02:56:17.461683 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-02-03 02:56:17.461689 | orchestrator | Tuesday 03 February 2026 02:56:16 +0000 (0:00:00.504) 0:06:23.480 ******
2026-02-03 02:56:17.461695 | orchestrator | skipping: [testbed-manager] =>
(item=python3-docker)  2026-02-03 02:56:17.461700 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-02-03 02:56:17.461706 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:56:17.461712 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-02-03 02:56:17.461718 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-02-03 02:56:17.461723 | orchestrator | skipping: [testbed-node-3] 2026-02-03 02:56:17.461729 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-02-03 02:56:17.461735 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-02-03 02:56:17.461741 | orchestrator | skipping: [testbed-node-4] 2026-02-03 02:56:17.461751 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-02-03 02:56:36.181596 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-02-03 02:56:36.181726 | orchestrator | skipping: [testbed-node-5] 2026-02-03 02:56:36.181752 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-02-03 02:56:36.181773 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-02-03 02:56:36.181786 | orchestrator | skipping: [testbed-node-0] 2026-02-03 02:56:36.181852 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-02-03 02:56:36.181865 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-02-03 02:56:36.181876 | orchestrator | skipping: [testbed-node-1] 2026-02-03 02:56:36.181886 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-02-03 02:56:36.181897 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-02-03 02:56:36.181908 | orchestrator | skipping: [testbed-node-2] 2026-02-03 02:56:36.181919 | orchestrator | 2026-02-03 02:56:36.181933 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-02-03 02:56:36.181945 | 
orchestrator | Tuesday 03 February 2026 02:56:17 +0000 (0:00:00.810) 0:06:24.290 ****** 2026-02-03 02:56:36.181956 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:56:36.181967 | orchestrator | skipping: [testbed-node-3] 2026-02-03 02:56:36.181978 | orchestrator | skipping: [testbed-node-4] 2026-02-03 02:56:36.181988 | orchestrator | skipping: [testbed-node-5] 2026-02-03 02:56:36.181999 | orchestrator | skipping: [testbed-node-0] 2026-02-03 02:56:36.182009 | orchestrator | skipping: [testbed-node-1] 2026-02-03 02:56:36.182085 | orchestrator | skipping: [testbed-node-2] 2026-02-03 02:56:36.182097 | orchestrator | 2026-02-03 02:56:36.182109 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-02-03 02:56:36.182120 | orchestrator | Tuesday 03 February 2026 02:56:18 +0000 (0:00:00.517) 0:06:24.808 ****** 2026-02-03 02:56:36.182166 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:56:36.182179 | orchestrator | skipping: [testbed-node-3] 2026-02-03 02:56:36.182192 | orchestrator | skipping: [testbed-node-4] 2026-02-03 02:56:36.182204 | orchestrator | skipping: [testbed-node-5] 2026-02-03 02:56:36.182216 | orchestrator | skipping: [testbed-node-0] 2026-02-03 02:56:36.182229 | orchestrator | skipping: [testbed-node-1] 2026-02-03 02:56:36.182241 | orchestrator | skipping: [testbed-node-2] 2026-02-03 02:56:36.182253 | orchestrator | 2026-02-03 02:56:36.182267 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-02-03 02:56:36.182309 | orchestrator | Tuesday 03 February 2026 02:56:18 +0000 (0:00:00.522) 0:06:25.331 ****** 2026-02-03 02:56:36.182322 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:56:36.182335 | orchestrator | skipping: [testbed-node-3] 2026-02-03 02:56:36.182348 | orchestrator | skipping: [testbed-node-4] 2026-02-03 02:56:36.182360 | orchestrator | skipping: [testbed-node-5] 2026-02-03 02:56:36.182372 | orchestrator | 
skipping: [testbed-node-0] 2026-02-03 02:56:36.182385 | orchestrator | skipping: [testbed-node-1] 2026-02-03 02:56:36.182397 | orchestrator | skipping: [testbed-node-2] 2026-02-03 02:56:36.182410 | orchestrator | 2026-02-03 02:56:36.182422 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-02-03 02:56:36.182435 | orchestrator | Tuesday 03 February 2026 02:56:19 +0000 (0:00:00.561) 0:06:25.893 ****** 2026-02-03 02:56:36.182447 | orchestrator | ok: [testbed-manager] 2026-02-03 02:56:36.182460 | orchestrator | ok: [testbed-node-4] 2026-02-03 02:56:36.182473 | orchestrator | ok: [testbed-node-3] 2026-02-03 02:56:36.182486 | orchestrator | ok: [testbed-node-5] 2026-02-03 02:56:36.182498 | orchestrator | ok: [testbed-node-0] 2026-02-03 02:56:36.182509 | orchestrator | ok: [testbed-node-2] 2026-02-03 02:56:36.182520 | orchestrator | ok: [testbed-node-1] 2026-02-03 02:56:36.182531 | orchestrator | 2026-02-03 02:56:36.182542 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-02-03 02:56:36.182553 | orchestrator | Tuesday 03 February 2026 02:56:21 +0000 (0:00:02.002) 0:06:27.895 ****** 2026-02-03 02:56:36.182565 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 02:56:36.182586 | orchestrator | 2026-02-03 02:56:36.182605 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-02-03 02:56:36.182624 | orchestrator | Tuesday 03 February 2026 02:56:22 +0000 (0:00:00.933) 0:06:28.829 ****** 2026-02-03 02:56:36.182661 | orchestrator | ok: [testbed-manager] 2026-02-03 02:56:36.182673 | orchestrator | changed: [testbed-node-3] 2026-02-03 02:56:36.182684 | orchestrator | changed: [testbed-node-4] 2026-02-03 02:56:36.182694 | orchestrator | 
changed: [testbed-node-5] 2026-02-03 02:56:36.182705 | orchestrator | changed: [testbed-node-0] 2026-02-03 02:56:36.182716 | orchestrator | changed: [testbed-node-1] 2026-02-03 02:56:36.182727 | orchestrator | changed: [testbed-node-2] 2026-02-03 02:56:36.182738 | orchestrator | 2026-02-03 02:56:36.182748 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-02-03 02:56:36.182759 | orchestrator | Tuesday 03 February 2026 02:56:23 +0000 (0:00:00.837) 0:06:29.666 ****** 2026-02-03 02:56:36.182770 | orchestrator | ok: [testbed-manager] 2026-02-03 02:56:36.182781 | orchestrator | changed: [testbed-node-4] 2026-02-03 02:56:36.182791 | orchestrator | changed: [testbed-node-3] 2026-02-03 02:56:36.182802 | orchestrator | changed: [testbed-node-5] 2026-02-03 02:56:36.182813 | orchestrator | changed: [testbed-node-0] 2026-02-03 02:56:36.182823 | orchestrator | changed: [testbed-node-1] 2026-02-03 02:56:36.182834 | orchestrator | changed: [testbed-node-2] 2026-02-03 02:56:36.182845 | orchestrator | 2026-02-03 02:56:36.182856 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-02-03 02:56:36.182866 | orchestrator | Tuesday 03 February 2026 02:56:23 +0000 (0:00:00.858) 0:06:30.525 ****** 2026-02-03 02:56:36.182877 | orchestrator | ok: [testbed-manager] 2026-02-03 02:56:36.182888 | orchestrator | changed: [testbed-node-4] 2026-02-03 02:56:36.182899 | orchestrator | changed: [testbed-node-3] 2026-02-03 02:56:36.182909 | orchestrator | changed: [testbed-node-5] 2026-02-03 02:56:36.182920 | orchestrator | changed: [testbed-node-0] 2026-02-03 02:56:36.182930 | orchestrator | changed: [testbed-node-1] 2026-02-03 02:56:36.182941 | orchestrator | changed: [testbed-node-2] 2026-02-03 02:56:36.182952 | orchestrator | 2026-02-03 02:56:36.182963 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2026-02-03 02:56:36.182992 | 
orchestrator | Tuesday 03 February 2026 02:56:25 +0000 (0:00:01.329) 0:06:31.855 ****** 2026-02-03 02:56:36.183004 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:56:36.183015 | orchestrator | ok: [testbed-node-3] 2026-02-03 02:56:36.183026 | orchestrator | ok: [testbed-node-4] 2026-02-03 02:56:36.183037 | orchestrator | ok: [testbed-node-5] 2026-02-03 02:56:36.183048 | orchestrator | ok: [testbed-node-1] 2026-02-03 02:56:36.183059 | orchestrator | ok: [testbed-node-0] 2026-02-03 02:56:36.183069 | orchestrator | ok: [testbed-node-2] 2026-02-03 02:56:36.183080 | orchestrator | 2026-02-03 02:56:36.183091 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-02-03 02:56:36.183102 | orchestrator | Tuesday 03 February 2026 02:56:26 +0000 (0:00:01.248) 0:06:33.104 ****** 2026-02-03 02:56:36.183113 | orchestrator | ok: [testbed-manager] 2026-02-03 02:56:36.183138 | orchestrator | changed: [testbed-node-3] 2026-02-03 02:56:36.183160 | orchestrator | changed: [testbed-node-5] 2026-02-03 02:56:36.183171 | orchestrator | changed: [testbed-node-4] 2026-02-03 02:56:36.183182 | orchestrator | changed: [testbed-node-0] 2026-02-03 02:56:36.183193 | orchestrator | changed: [testbed-node-1] 2026-02-03 02:56:36.183203 | orchestrator | changed: [testbed-node-2] 2026-02-03 02:56:36.183214 | orchestrator | 2026-02-03 02:56:36.183225 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-02-03 02:56:36.183236 | orchestrator | Tuesday 03 February 2026 02:56:27 +0000 (0:00:01.192) 0:06:34.297 ****** 2026-02-03 02:56:36.183246 | orchestrator | changed: [testbed-manager] 2026-02-03 02:56:36.183257 | orchestrator | changed: [testbed-node-4] 2026-02-03 02:56:36.183353 | orchestrator | changed: [testbed-node-3] 2026-02-03 02:56:36.183369 | orchestrator | changed: [testbed-node-5] 2026-02-03 02:56:36.183380 | orchestrator | changed: [testbed-node-0] 2026-02-03 02:56:36.183390 | 
orchestrator | changed: [testbed-node-1] 2026-02-03 02:56:36.183403 | orchestrator | changed: [testbed-node-2] 2026-02-03 02:56:36.183421 | orchestrator | 2026-02-03 02:56:36.183453 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-02-03 02:56:36.183475 | orchestrator | Tuesday 03 February 2026 02:56:29 +0000 (0:00:01.305) 0:06:35.603 ****** 2026-02-03 02:56:36.183494 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 02:56:36.183506 | orchestrator | 2026-02-03 02:56:36.183516 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-02-03 02:56:36.183527 | orchestrator | Tuesday 03 February 2026 02:56:29 +0000 (0:00:00.901) 0:06:36.504 ****** 2026-02-03 02:56:36.183538 | orchestrator | ok: [testbed-node-3] 2026-02-03 02:56:36.183549 | orchestrator | ok: [testbed-manager] 2026-02-03 02:56:36.183560 | orchestrator | ok: [testbed-node-4] 2026-02-03 02:56:36.183571 | orchestrator | ok: [testbed-node-5] 2026-02-03 02:56:36.183582 | orchestrator | ok: [testbed-node-0] 2026-02-03 02:56:36.183593 | orchestrator | ok: [testbed-node-1] 2026-02-03 02:56:36.183603 | orchestrator | ok: [testbed-node-2] 2026-02-03 02:56:36.183614 | orchestrator | 2026-02-03 02:56:36.183625 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-02-03 02:56:36.183636 | orchestrator | Tuesday 03 February 2026 02:56:31 +0000 (0:00:01.233) 0:06:37.737 ****** 2026-02-03 02:56:36.183647 | orchestrator | ok: [testbed-node-3] 2026-02-03 02:56:36.183658 | orchestrator | ok: [testbed-manager] 2026-02-03 02:56:36.183669 | orchestrator | ok: [testbed-node-4] 2026-02-03 02:56:36.183679 | orchestrator | ok: [testbed-node-5] 2026-02-03 02:56:36.183690 | orchestrator | 
ok: [testbed-node-0] 2026-02-03 02:56:36.183715 | orchestrator | ok: [testbed-node-1] 2026-02-03 02:56:36.183726 | orchestrator | ok: [testbed-node-2] 2026-02-03 02:56:36.183737 | orchestrator | 2026-02-03 02:56:36.183748 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-02-03 02:56:36.183759 | orchestrator | Tuesday 03 February 2026 02:56:32 +0000 (0:00:01.093) 0:06:38.831 ****** 2026-02-03 02:56:36.183769 | orchestrator | ok: [testbed-node-3] 2026-02-03 02:56:36.183780 | orchestrator | ok: [testbed-node-4] 2026-02-03 02:56:36.183791 | orchestrator | ok: [testbed-node-5] 2026-02-03 02:56:36.183801 | orchestrator | ok: [testbed-node-0] 2026-02-03 02:56:36.183812 | orchestrator | ok: [testbed-node-1] 2026-02-03 02:56:36.183823 | orchestrator | ok: [testbed-node-2] 2026-02-03 02:56:36.183833 | orchestrator | ok: [testbed-manager] 2026-02-03 02:56:36.183844 | orchestrator | 2026-02-03 02:56:36.183855 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-02-03 02:56:36.183866 | orchestrator | Tuesday 03 February 2026 02:56:33 +0000 (0:00:01.640) 0:06:40.471 ****** 2026-02-03 02:56:36.183877 | orchestrator | ok: [testbed-manager] 2026-02-03 02:56:36.183888 | orchestrator | ok: [testbed-node-3] 2026-02-03 02:56:36.183898 | orchestrator | ok: [testbed-node-4] 2026-02-03 02:56:36.183909 | orchestrator | ok: [testbed-node-5] 2026-02-03 02:56:36.183919 | orchestrator | ok: [testbed-node-0] 2026-02-03 02:56:36.183930 | orchestrator | ok: [testbed-node-1] 2026-02-03 02:56:36.183941 | orchestrator | ok: [testbed-node-2] 2026-02-03 02:56:36.183952 | orchestrator | 2026-02-03 02:56:36.183962 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-02-03 02:56:36.183973 | orchestrator | Tuesday 03 February 2026 02:56:35 +0000 (0:00:01.139) 0:06:41.611 ****** 2026-02-03 02:56:36.183984 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 02:56:36.183995 | orchestrator | 2026-02-03 02:56:36.184006 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-03 02:56:36.184017 | orchestrator | Tuesday 03 February 2026 02:56:35 +0000 (0:00:00.850) 0:06:42.461 ****** 2026-02-03 02:56:36.184027 | orchestrator | 2026-02-03 02:56:36.184038 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-03 02:56:36.184057 | orchestrator | Tuesday 03 February 2026 02:56:35 +0000 (0:00:00.036) 0:06:42.498 ****** 2026-02-03 02:56:36.184068 | orchestrator | 2026-02-03 02:56:36.184079 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-03 02:56:36.184090 | orchestrator | Tuesday 03 February 2026 02:56:35 +0000 (0:00:00.036) 0:06:42.535 ****** 2026-02-03 02:56:36.184100 | orchestrator | 2026-02-03 02:56:36.184111 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-03 02:56:36.184132 | orchestrator | Tuesday 03 February 2026 02:56:36 +0000 (0:00:00.040) 0:06:42.575 ****** 2026-02-03 02:57:03.765898 | orchestrator | 2026-02-03 02:57:03.765988 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-03 02:57:03.765998 | orchestrator | Tuesday 03 February 2026 02:56:36 +0000 (0:00:00.038) 0:06:42.614 ****** 2026-02-03 02:57:03.766005 | orchestrator | 2026-02-03 02:57:03.766011 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-03 02:57:03.766086 | orchestrator | Tuesday 03 February 2026 02:56:36 +0000 (0:00:00.040) 0:06:42.654 ****** 2026-02-03 02:57:03.766106 | orchestrator | 2026-02-03 
02:57:03.766116 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-03 02:57:03.766127 | orchestrator | Tuesday 03 February 2026 02:56:36 +0000 (0:00:00.041) 0:06:42.696 ****** 2026-02-03 02:57:03.766137 | orchestrator | 2026-02-03 02:57:03.766147 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-03 02:57:03.766155 | orchestrator | Tuesday 03 February 2026 02:56:36 +0000 (0:00:00.037) 0:06:42.733 ****** 2026-02-03 02:57:03.766162 | orchestrator | ok: [testbed-node-1] 2026-02-03 02:57:03.766169 | orchestrator | ok: [testbed-node-0] 2026-02-03 02:57:03.766175 | orchestrator | ok: [testbed-node-2] 2026-02-03 02:57:03.766181 | orchestrator | 2026-02-03 02:57:03.766187 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-02-03 02:57:03.766193 | orchestrator | Tuesday 03 February 2026 02:56:37 +0000 (0:00:00.980) 0:06:43.713 ****** 2026-02-03 02:57:03.766200 | orchestrator | changed: [testbed-manager] 2026-02-03 02:57:03.766206 | orchestrator | changed: [testbed-node-3] 2026-02-03 02:57:03.766212 | orchestrator | changed: [testbed-node-5] 2026-02-03 02:57:03.766218 | orchestrator | changed: [testbed-node-4] 2026-02-03 02:57:03.766224 | orchestrator | changed: [testbed-node-0] 2026-02-03 02:57:03.766230 | orchestrator | changed: [testbed-node-1] 2026-02-03 02:57:03.766235 | orchestrator | changed: [testbed-node-2] 2026-02-03 02:57:03.766241 | orchestrator | 2026-02-03 02:57:03.766247 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-02-03 02:57:03.766253 | orchestrator | Tuesday 03 February 2026 02:56:38 +0000 (0:00:01.262) 0:06:44.975 ****** 2026-02-03 02:57:03.766258 | orchestrator | changed: [testbed-manager] 2026-02-03 02:57:03.766264 | orchestrator | changed: [testbed-node-3] 2026-02-03 02:57:03.766270 | orchestrator | changed: [testbed-node-5] 2026-02-03 
02:57:03.766276 | orchestrator | changed: [testbed-node-4] 2026-02-03 02:57:03.766281 | orchestrator | changed: [testbed-node-0] 2026-02-03 02:57:03.766287 | orchestrator | changed: [testbed-node-1] 2026-02-03 02:57:03.766293 | orchestrator | changed: [testbed-node-2] 2026-02-03 02:57:03.766298 | orchestrator | 2026-02-03 02:57:03.766322 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-02-03 02:57:03.766329 | orchestrator | Tuesday 03 February 2026 02:56:39 +0000 (0:00:01.527) 0:06:46.503 ****** 2026-02-03 02:57:03.766335 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:57:03.766341 | orchestrator | changed: [testbed-node-3] 2026-02-03 02:57:03.766347 | orchestrator | changed: [testbed-node-4] 2026-02-03 02:57:03.766352 | orchestrator | changed: [testbed-node-5] 2026-02-03 02:57:03.766359 | orchestrator | changed: [testbed-node-1] 2026-02-03 02:57:03.766364 | orchestrator | changed: [testbed-node-0] 2026-02-03 02:57:03.766370 | orchestrator | changed: [testbed-node-2] 2026-02-03 02:57:03.766376 | orchestrator | 2026-02-03 02:57:03.766382 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-02-03 02:57:03.766388 | orchestrator | Tuesday 03 February 2026 02:56:42 +0000 (0:00:02.841) 0:06:49.345 ****** 2026-02-03 02:57:03.766421 | orchestrator | skipping: [testbed-node-3] 2026-02-03 02:57:03.766428 | orchestrator | 2026-02-03 02:57:03.766434 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-02-03 02:57:03.766440 | orchestrator | Tuesday 03 February 2026 02:56:42 +0000 (0:00:00.104) 0:06:49.449 ****** 2026-02-03 02:57:03.766446 | orchestrator | ok: [testbed-manager] 2026-02-03 02:57:03.766452 | orchestrator | changed: [testbed-node-3] 2026-02-03 02:57:03.766458 | orchestrator | changed: [testbed-node-4] 2026-02-03 02:57:03.766464 | orchestrator | changed: [testbed-node-5] 2026-02-03 02:57:03.766471 | 
orchestrator | changed: [testbed-node-0] 2026-02-03 02:57:03.766478 | orchestrator | changed: [testbed-node-1] 2026-02-03 02:57:03.766484 | orchestrator | changed: [testbed-node-2] 2026-02-03 02:57:03.766491 | orchestrator | 2026-02-03 02:57:03.766498 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-02-03 02:57:03.766506 | orchestrator | Tuesday 03 February 2026 02:56:43 +0000 (0:00:01.067) 0:06:50.516 ****** 2026-02-03 02:57:03.766513 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:57:03.766519 | orchestrator | skipping: [testbed-node-3] 2026-02-03 02:57:03.766526 | orchestrator | skipping: [testbed-node-4] 2026-02-03 02:57:03.766533 | orchestrator | skipping: [testbed-node-5] 2026-02-03 02:57:03.766539 | orchestrator | skipping: [testbed-node-0] 2026-02-03 02:57:03.766546 | orchestrator | skipping: [testbed-node-1] 2026-02-03 02:57:03.766553 | orchestrator | skipping: [testbed-node-2] 2026-02-03 02:57:03.766559 | orchestrator | 2026-02-03 02:57:03.766566 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-02-03 02:57:03.766591 | orchestrator | Tuesday 03 February 2026 02:56:44 +0000 (0:00:00.546) 0:06:51.063 ****** 2026-02-03 02:57:03.766599 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 02:57:03.766607 | orchestrator | 2026-02-03 02:57:03.766615 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-02-03 02:57:03.766630 | orchestrator | Tuesday 03 February 2026 02:56:45 +0000 (0:00:01.155) 0:06:52.218 ****** 2026-02-03 02:57:03.766636 | orchestrator | ok: [testbed-manager] 2026-02-03 02:57:03.766643 | orchestrator | ok: [testbed-node-3] 2026-02-03 02:57:03.766650 | orchestrator | ok: 
[testbed-node-5] 2026-02-03 02:57:03.766657 | orchestrator | ok: [testbed-node-4] 2026-02-03 02:57:03.766663 | orchestrator | ok: [testbed-node-0] 2026-02-03 02:57:03.766670 | orchestrator | ok: [testbed-node-1] 2026-02-03 02:57:03.766677 | orchestrator | ok: [testbed-node-2] 2026-02-03 02:57:03.766683 | orchestrator | 2026-02-03 02:57:03.766690 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-02-03 02:57:03.766697 | orchestrator | Tuesday 03 February 2026 02:56:46 +0000 (0:00:00.866) 0:06:53.084 ****** 2026-02-03 02:57:03.766704 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-02-03 02:57:03.766726 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-02-03 02:57:03.766734 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-02-03 02:57:03.766741 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-02-03 02:57:03.766748 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-02-03 02:57:03.766754 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-02-03 02:57:03.766761 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-02-03 02:57:03.766768 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-02-03 02:57:03.766774 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-02-03 02:57:03.766782 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-02-03 02:57:03.766788 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-02-03 02:57:03.766795 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-02-03 02:57:03.766807 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-02-03 02:57:03.766814 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-02-03 02:57:03.766821 | orchestrator | 2026-02-03 02:57:03.766827 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-02-03 02:57:03.766833 | orchestrator | Tuesday 03 February 2026 02:56:49 +0000 (0:00:02.545) 0:06:55.630 ****** 2026-02-03 02:57:03.766839 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:57:03.766845 | orchestrator | skipping: [testbed-node-3] 2026-02-03 02:57:03.766851 | orchestrator | skipping: [testbed-node-4] 2026-02-03 02:57:03.766857 | orchestrator | skipping: [testbed-node-5] 2026-02-03 02:57:03.766863 | orchestrator | skipping: [testbed-node-0] 2026-02-03 02:57:03.766869 | orchestrator | skipping: [testbed-node-1] 2026-02-03 02:57:03.766874 | orchestrator | skipping: [testbed-node-2] 2026-02-03 02:57:03.766880 | orchestrator | 2026-02-03 02:57:03.766886 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-02-03 02:57:03.766892 | orchestrator | Tuesday 03 February 2026 02:56:49 +0000 (0:00:00.693) 0:06:56.323 ****** 2026-02-03 02:57:03.766900 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 02:57:03.766907 | orchestrator | 2026-02-03 02:57:03.766913 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-02-03 02:57:03.766919 | orchestrator | Tuesday 03 February 2026 02:56:50 +0000 (0:00:00.839) 0:06:57.163 ****** 2026-02-03 02:57:03.766924 | orchestrator | ok: [testbed-manager] 2026-02-03 02:57:03.766930 | orchestrator | ok: [testbed-node-3] 2026-02-03 02:57:03.766936 | orchestrator | ok: [testbed-node-4] 2026-02-03 02:57:03.766942 | orchestrator | ok: [testbed-node-5] 2026-02-03 02:57:03.766947 | orchestrator | ok: [testbed-node-1] 2026-02-03 02:57:03.766953 | orchestrator | ok: [testbed-node-0] 2026-02-03 02:57:03.766959 | orchestrator | ok: 
[testbed-node-2] 2026-02-03 02:57:03.766965 | orchestrator | 2026-02-03 02:57:03.766971 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-02-03 02:57:03.767026 | orchestrator | Tuesday 03 February 2026 02:56:51 +0000 (0:00:00.873) 0:06:58.036 ****** 2026-02-03 02:57:03.767038 | orchestrator | ok: [testbed-node-3] 2026-02-03 02:57:03.767044 | orchestrator | ok: [testbed-manager] 2026-02-03 02:57:03.767050 | orchestrator | ok: [testbed-node-4] 2026-02-03 02:57:03.767056 | orchestrator | ok: [testbed-node-5] 2026-02-03 02:57:03.767062 | orchestrator | ok: [testbed-node-0] 2026-02-03 02:57:03.767067 | orchestrator | ok: [testbed-node-1] 2026-02-03 02:57:03.767073 | orchestrator | ok: [testbed-node-2] 2026-02-03 02:57:03.767079 | orchestrator | 2026-02-03 02:57:03.767089 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-02-03 02:57:03.767142 | orchestrator | Tuesday 03 February 2026 02:56:52 +0000 (0:00:00.998) 0:06:59.035 ****** 2026-02-03 02:57:03.767153 | orchestrator | skipping: [testbed-manager] 2026-02-03 02:57:03.767163 | orchestrator | skipping: [testbed-node-3] 2026-02-03 02:57:03.767172 | orchestrator | skipping: [testbed-node-4] 2026-02-03 02:57:03.767178 | orchestrator | skipping: [testbed-node-5] 2026-02-03 02:57:03.767183 | orchestrator | skipping: [testbed-node-0] 2026-02-03 02:57:03.767189 | orchestrator | skipping: [testbed-node-1] 2026-02-03 02:57:03.767195 | orchestrator | skipping: [testbed-node-2] 2026-02-03 02:57:03.767200 | orchestrator | 2026-02-03 02:57:03.767206 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-02-03 02:57:03.767212 | orchestrator | Tuesday 03 February 2026 02:56:52 +0000 (0:00:00.532) 0:06:59.568 ****** 2026-02-03 02:57:03.767218 | orchestrator | ok: [testbed-manager] 2026-02-03 02:57:03.767223 | orchestrator | ok: [testbed-node-3] 2026-02-03 02:57:03.767229 | 
orchestrator | ok: [testbed-node-5]
2026-02-03 02:57:03.767235 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:57:03.767240 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:57:03.767252 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:57:03.767258 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:57:03.767263 | orchestrator |
2026-02-03 02:57:03.767269 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-02-03 02:57:03.767275 | orchestrator | Tuesday 03 February 2026 02:56:54 +0000 (0:00:01.678) 0:07:01.246 ******
2026-02-03 02:57:03.767281 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:57:03.767287 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:57:03.767292 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:57:03.767298 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:57:03.767318 | orchestrator | skipping: [testbed-node-0]
2026-02-03 02:57:03.767328 | orchestrator | skipping: [testbed-node-1]
2026-02-03 02:57:03.767334 | orchestrator | skipping: [testbed-node-2]
2026-02-03 02:57:03.767340 | orchestrator |
2026-02-03 02:57:03.767345 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-02-03 02:57:03.767351 | orchestrator | Tuesday 03 February 2026 02:56:55 +0000 (0:00:00.537) 0:07:01.783 ******
2026-02-03 02:57:03.767357 | orchestrator | ok: [testbed-manager]
2026-02-03 02:57:03.767363 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:57:03.767369 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:57:03.767375 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:57:03.767381 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:57:03.767386 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:57:03.767398 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:57:37.077286 | orchestrator |
2026-02-03 02:57:37.077432 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-02-03 02:57:37.077445 | orchestrator | Tuesday 03 February 2026 02:57:03 +0000 (0:00:08.534) 0:07:10.318 ******
2026-02-03 02:57:37.077452 | orchestrator | ok: [testbed-manager]
2026-02-03 02:57:37.077458 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:57:37.077465 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:57:37.077471 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:57:37.077476 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:57:37.077482 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:57:37.077488 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:57:37.077493 | orchestrator |
2026-02-03 02:57:37.077500 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-02-03 02:57:37.077505 | orchestrator | Tuesday 03 February 2026 02:57:05 +0000 (0:00:01.541) 0:07:11.859 ******
2026-02-03 02:57:37.077511 | orchestrator | ok: [testbed-manager]
2026-02-03 02:57:37.077516 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:57:37.077522 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:57:37.077527 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:57:37.077533 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:57:37.077538 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:57:37.077544 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:57:37.077549 | orchestrator |
2026-02-03 02:57:37.077555 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-02-03 02:57:37.077560 | orchestrator | Tuesday 03 February 2026 02:57:07 +0000 (0:00:01.813) 0:07:13.673 ******
2026-02-03 02:57:37.077566 | orchestrator | ok: [testbed-manager]
2026-02-03 02:57:37.077571 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:57:37.077577 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:57:37.077582 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:57:37.077588 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:57:37.077593 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:57:37.077598 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:57:37.077604 | orchestrator |
2026-02-03 02:57:37.077609 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-03 02:57:37.077615 | orchestrator | Tuesday 03 February 2026 02:57:08 +0000 (0:00:01.715) 0:07:15.389 ******
2026-02-03 02:57:37.077620 | orchestrator | ok: [testbed-manager]
2026-02-03 02:57:37.077626 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:57:37.077632 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:57:37.077655 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:57:37.077661 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:57:37.077666 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:57:37.077671 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:57:37.077677 | orchestrator |
2026-02-03 02:57:37.077682 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-03 02:57:37.077687 | orchestrator | Tuesday 03 February 2026 02:57:09 +0000 (0:00:00.913) 0:07:16.302 ******
2026-02-03 02:57:37.077693 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:57:37.077699 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:57:37.077704 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:57:37.077710 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:57:37.077715 | orchestrator | skipping: [testbed-node-0]
2026-02-03 02:57:37.077720 | orchestrator | skipping: [testbed-node-1]
2026-02-03 02:57:37.077726 | orchestrator | skipping: [testbed-node-2]
2026-02-03 02:57:37.077731 | orchestrator |
2026-02-03 02:57:37.077737 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-02-03 02:57:37.077742 | orchestrator | Tuesday 03 February 2026 02:57:10 +0000 (0:00:01.014) 0:07:17.317 ******
2026-02-03 02:57:37.077748 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:57:37.077753 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:57:37.077759 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:57:37.077764 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:57:37.077769 | orchestrator | skipping: [testbed-node-0]
2026-02-03 02:57:37.077774 | orchestrator | skipping: [testbed-node-1]
2026-02-03 02:57:37.077780 | orchestrator | skipping: [testbed-node-2]
2026-02-03 02:57:37.077785 | orchestrator |
2026-02-03 02:57:37.077790 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-02-03 02:57:37.077796 | orchestrator | Tuesday 03 February 2026 02:57:11 +0000 (0:00:00.564) 0:07:17.881 ******
2026-02-03 02:57:37.077801 | orchestrator | ok: [testbed-manager]
2026-02-03 02:57:37.077819 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:57:37.077825 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:57:37.077831 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:57:37.077836 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:57:37.077842 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:57:37.077848 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:57:37.077855 | orchestrator |
2026-02-03 02:57:37.077861 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-02-03 02:57:37.077868 | orchestrator | Tuesday 03 February 2026 02:57:11 +0000 (0:00:00.536) 0:07:18.418 ******
2026-02-03 02:57:37.077874 | orchestrator | ok: [testbed-manager]
2026-02-03 02:57:37.077881 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:57:37.077888 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:57:37.077894 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:57:37.077903 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:57:37.077912 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:57:37.077920 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:57:37.077929 | orchestrator |
2026-02-03 02:57:37.077938 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-02-03 02:57:37.077948 | orchestrator | Tuesday 03 February 2026 02:57:12 +0000 (0:00:00.528) 0:07:18.947 ******
2026-02-03 02:57:37.077956 | orchestrator | ok: [testbed-manager]
2026-02-03 02:57:37.077965 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:57:37.077973 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:57:37.077981 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:57:37.077990 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:57:37.077998 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:57:37.078006 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:57:37.078069 | orchestrator |
2026-02-03 02:57:37.078080 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-02-03 02:57:37.078090 | orchestrator | Tuesday 03 February 2026 02:57:13 +0000 (0:00:00.732) 0:07:19.680 ******
2026-02-03 02:57:37.078099 | orchestrator | ok: [testbed-manager]
2026-02-03 02:57:37.078108 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:57:37.078152 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:57:37.078162 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:57:37.078171 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:57:37.078179 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:57:37.078188 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:57:37.078196 | orchestrator |
2026-02-03 02:57:37.078223 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-02-03 02:57:37.078232 | orchestrator | Tuesday 03 February 2026 02:57:18 +0000 (0:00:05.764) 0:07:25.445 ******
2026-02-03 02:57:37.078241 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:57:37.078250 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:57:37.078259 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:57:37.078267 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:57:37.078276 | orchestrator | skipping: [testbed-node-0]
2026-02-03 02:57:37.078284 | orchestrator | skipping: [testbed-node-1]
2026-02-03 02:57:37.078293 | orchestrator | skipping: [testbed-node-2]
2026-02-03 02:57:37.078303 | orchestrator |
2026-02-03 02:57:37.078312 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-02-03 02:57:37.078320 | orchestrator | Tuesday 03 February 2026 02:57:19 +0000 (0:00:00.533) 0:07:25.978 ******
2026-02-03 02:57:37.078331 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 02:57:37.078342 | orchestrator |
2026-02-03 02:57:37.078379 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-02-03 02:57:37.078389 | orchestrator | Tuesday 03 February 2026 02:57:20 +0000 (0:00:01.019) 0:07:26.998 ******
2026-02-03 02:57:37.078398 | orchestrator | ok: [testbed-manager]
2026-02-03 02:57:37.078407 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:57:37.078414 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:57:37.078422 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:57:37.078431 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:57:37.078439 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:57:37.078447 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:57:37.078455 | orchestrator |
2026-02-03 02:57:37.078462 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-02-03 02:57:37.078470 | orchestrator | Tuesday 03 February 2026 02:57:22 +0000 (0:00:02.056) 0:07:29.054 ******
2026-02-03 02:57:37.078479 | orchestrator | ok: [testbed-manager]
2026-02-03 02:57:37.078487 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:57:37.078495 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:57:37.078503 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:57:37.078511 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:57:37.078519 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:57:37.078527 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:57:37.078535 | orchestrator |
2026-02-03 02:57:37.078543 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-02-03 02:57:37.078551 | orchestrator | Tuesday 03 February 2026 02:57:23 +0000 (0:00:01.254) 0:07:30.308 ******
2026-02-03 02:57:37.078559 | orchestrator | ok: [testbed-manager]
2026-02-03 02:57:37.078567 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:57:37.078576 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:57:37.078585 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:57:37.078592 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:57:37.078602 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:57:37.078611 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:57:37.078620 | orchestrator |
2026-02-03 02:57:37.078628 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-02-03 02:57:37.078636 | orchestrator | Tuesday 03 February 2026 02:57:24 +0000 (0:00:00.868) 0:07:31.177 ******
2026-02-03 02:57:37.078653 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-03 02:57:37.078663 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-03 02:57:37.078683 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-03 02:57:37.078692 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-03 02:57:37.078701 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-03 02:57:37.078710 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-03 02:57:37.078719 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-03 02:57:37.078728 | orchestrator |
2026-02-03 02:57:37.078736 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-02-03 02:57:37.078744 | orchestrator | Tuesday 03 February 2026 02:57:26 +0000 (0:00:01.845) 0:07:33.022 ******
2026-02-03 02:57:37.078754 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 02:57:37.078764 | orchestrator |
2026-02-03 02:57:37.078773 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-02-03 02:57:37.078783 | orchestrator | Tuesday 03 February 2026 02:57:27 +0000 (0:00:00.844) 0:07:33.867 ******
2026-02-03 02:57:37.078792 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:57:37.078801 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:57:37.078811 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:57:37.078819 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:57:37.078827 | orchestrator | changed: [testbed-manager]
2026-02-03 02:57:37.078836 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:57:37.078845 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:57:37.078854 | orchestrator |
2026-02-03 02:57:37.078875 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-02-03 02:58:09.483814 | orchestrator | Tuesday 03 February 2026 02:57:37 +0000 (0:00:09.762) 0:07:43.630 ******
2026-02-03 02:58:09.483915 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:58:09.483928 | orchestrator | ok: [testbed-manager]
2026-02-03 02:58:09.483936 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:58:09.483944 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:58:09.483952 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:58:09.483960 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:58:09.483968 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:58:09.483977 | orchestrator |
2026-02-03 02:58:09.483987 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-02-03 02:58:09.483996 | orchestrator | Tuesday 03 February 2026 02:57:39 +0000 (0:00:02.210) 0:07:45.841 ******
2026-02-03 02:58:09.484003 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:58:09.484011 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:58:09.484019 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:58:09.484027 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:58:09.484035 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:58:09.484043 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:58:09.484051 | orchestrator |
2026-02-03 02:58:09.484059 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-02-03 02:58:09.484067 | orchestrator | Tuesday 03 February 2026 02:57:40 +0000 (0:00:01.318) 0:07:47.159 ******
2026-02-03 02:58:09.484075 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:58:09.484084 | orchestrator | changed: [testbed-manager]
2026-02-03 02:58:09.484092 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:58:09.484100 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:58:09.484108 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:58:09.484137 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:58:09.484146 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:58:09.484154 | orchestrator |
2026-02-03 02:58:09.484163 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-02-03 02:58:09.484172 | orchestrator |
2026-02-03 02:58:09.484181 | orchestrator | TASK [Include hardening role] **************************************************
2026-02-03 02:58:09.484189 | orchestrator | Tuesday 03 February 2026 02:57:41 +0000 (0:00:01.277) 0:07:48.437 ******
2026-02-03 02:58:09.484198 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:58:09.484207 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:58:09.484216 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:58:09.484224 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:58:09.484233 | orchestrator | skipping: [testbed-node-0]
2026-02-03 02:58:09.484241 | orchestrator | skipping: [testbed-node-1]
2026-02-03 02:58:09.484250 | orchestrator | skipping: [testbed-node-2]
2026-02-03 02:58:09.484259 | orchestrator |
2026-02-03 02:58:09.484268 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-02-03 02:58:09.484276 | orchestrator |
2026-02-03 02:58:09.484285 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-02-03 02:58:09.484294 | orchestrator | Tuesday 03 February 2026 02:57:42 +0000 (0:00:00.738) 0:07:49.175 ******
2026-02-03 02:58:09.484303 | orchestrator | changed: [testbed-manager]
2026-02-03 02:58:09.484312 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:58:09.484320 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:58:09.484329 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:58:09.484338 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:58:09.484346 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:58:09.484355 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:58:09.484363 | orchestrator |
2026-02-03 02:58:09.484372 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-02-03 02:58:09.484415 | orchestrator | Tuesday 03 February 2026 02:57:43 +0000 (0:00:01.351) 0:07:50.526 ******
2026-02-03 02:58:09.484426 | orchestrator | ok: [testbed-manager]
2026-02-03 02:58:09.484435 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:58:09.484444 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:58:09.484453 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:58:09.484461 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:58:09.484470 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:58:09.484479 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:58:09.484487 | orchestrator |
2026-02-03 02:58:09.484496 | orchestrator | TASK [Include auditd role] *****************************************************
2026-02-03 02:58:09.484505 | orchestrator | Tuesday 03 February 2026 02:57:45 +0000 (0:00:01.469) 0:07:51.996 ******
2026-02-03 02:58:09.484513 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:58:09.484522 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:58:09.484531 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:58:09.484540 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:58:09.484548 | orchestrator | skipping: [testbed-node-0]
2026-02-03 02:58:09.484556 | orchestrator | skipping: [testbed-node-1]
2026-02-03 02:58:09.484565 | orchestrator | skipping: [testbed-node-2]
2026-02-03 02:58:09.484574 | orchestrator |
2026-02-03 02:58:09.484583 | orchestrator | TASK [Include smartd role] *****************************************************
2026-02-03 02:58:09.484592 | orchestrator | Tuesday 03 February 2026 02:57:45 +0000 (0:00:00.536) 0:07:52.533 ******
2026-02-03 02:58:09.484601 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 02:58:09.484612 | orchestrator |
2026-02-03 02:58:09.484621 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-02-03 02:58:09.484629 | orchestrator | Tuesday 03 February 2026 02:57:46 +0000 (0:00:01.035) 0:07:53.569 ******
2026-02-03 02:58:09.484639 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 02:58:09.484658 | orchestrator |
2026-02-03 02:58:09.484667 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-02-03 02:58:09.484676 | orchestrator | Tuesday 03 February 2026 02:57:47 +0000 (0:00:00.822) 0:07:54.392 ******
2026-02-03 02:58:09.484684 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:58:09.484693 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:58:09.484702 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:58:09.484711 | orchestrator | changed: [testbed-manager]
2026-02-03 02:58:09.484720 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:58:09.484729 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:58:09.484737 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:58:09.484746 | orchestrator |
2026-02-03 02:58:09.484773 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-02-03 02:58:09.484782 | orchestrator | Tuesday 03 February 2026 02:57:57 +0000 (0:00:09.649) 0:08:04.041 ******
2026-02-03 02:58:09.484790 | orchestrator | changed: [testbed-manager]
2026-02-03 02:58:09.484798 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:58:09.484806 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:58:09.484813 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:58:09.484820 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:58:09.484828 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:58:09.484835 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:58:09.484842 | orchestrator |
2026-02-03 02:58:09.484850 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-02-03 02:58:09.484857 | orchestrator | Tuesday 03 February 2026 02:57:58 +0000 (0:00:01.056) 0:08:05.098 ******
2026-02-03 02:58:09.484865 | orchestrator | changed: [testbed-manager]
2026-02-03 02:58:09.484872 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:58:09.484880 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:58:09.484887 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:58:09.484895 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:58:09.484902 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:58:09.484910 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:58:09.484918 | orchestrator |
2026-02-03 02:58:09.484925 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-02-03 02:58:09.484932 | orchestrator | Tuesday 03 February 2026 02:57:59 +0000 (0:00:01.430) 0:08:06.529 ******
2026-02-03 02:58:09.484940 | orchestrator | changed: [testbed-manager]
2026-02-03 02:58:09.484948 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:58:09.484955 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:58:09.484962 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:58:09.484970 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:58:09.484978 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:58:09.484986 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:58:09.484995 | orchestrator |
2026-02-03 02:58:09.485003 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-02-03 02:58:09.485012 | orchestrator | Tuesday 03 February 2026 02:58:01 +0000 (0:00:01.923) 0:08:08.452 ******
2026-02-03 02:58:09.485020 | orchestrator | changed: [testbed-manager]
2026-02-03 02:58:09.485028 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:58:09.485036 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:58:09.485045 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:58:09.485053 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:58:09.485061 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:58:09.485069 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:58:09.485077 | orchestrator |
2026-02-03 02:58:09.485085 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-02-03 02:58:09.485093 | orchestrator | Tuesday 03 February 2026 02:58:03 +0000 (0:00:01.227) 0:08:09.679 ******
2026-02-03 02:58:09.485102 | orchestrator | changed: [testbed-manager]
2026-02-03 02:58:09.485109 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:58:09.485125 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:58:09.485132 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:58:09.485139 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:58:09.485145 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:58:09.485152 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:58:09.485159 | orchestrator |
2026-02-03 02:58:09.485167 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-02-03 02:58:09.485175 | orchestrator |
2026-02-03 02:58:09.485191 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-02-03 02:58:09.485200 | orchestrator | Tuesday 03 February 2026 02:58:04 +0000 (0:00:01.151) 0:08:10.830 ******
2026-02-03 02:58:09.485207 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 02:58:09.485216 | orchestrator |
2026-02-03 02:58:09.485224 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-02-03 02:58:09.485231 | orchestrator | Tuesday 03 February 2026 02:58:05 +0000 (0:00:00.842) 0:08:11.673 ******
2026-02-03 02:58:09.485239 | orchestrator | ok: [testbed-manager]
2026-02-03 02:58:09.485247 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:58:09.485252 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:58:09.485256 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:58:09.485261 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:58:09.485266 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:58:09.485270 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:58:09.485275 | orchestrator |
2026-02-03 02:58:09.485280 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-02-03 02:58:09.485285 | orchestrator | Tuesday 03 February 2026 02:58:06 +0000 (0:00:01.096) 0:08:12.770 ******
2026-02-03 02:58:09.485290 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:58:09.485295 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:58:09.485299 | orchestrator | changed: [testbed-manager]
2026-02-03 02:58:09.485304 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:58:09.485309 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:58:09.485313 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:58:09.485318 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:58:09.485323 | orchestrator |
2026-02-03 02:58:09.485328 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-02-03 02:58:09.485335 | orchestrator | Tuesday 03 February 2026 02:58:07 +0000 (0:00:01.324) 0:08:14.094 ******
2026-02-03 02:58:09.485342 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 02:58:09.485349 | orchestrator |
2026-02-03 02:58:09.485356 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-02-03 02:58:09.485362 | orchestrator | Tuesday 03 February 2026 02:58:08 +0000 (0:00:00.998) 0:08:15.093 ******
2026-02-03 02:58:09.485375 | orchestrator | ok: [testbed-manager]
2026-02-03 02:58:09.485386 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:58:09.485415 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:58:09.485422 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:58:09.485430 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:58:09.485437 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:58:09.485446 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:58:09.485453 | orchestrator |
2026-02-03 02:58:09.485474 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-02-03 02:58:11.118309 | orchestrator | Tuesday 03 February 2026 02:58:09 +0000 (0:00:00.943) 0:08:16.036 ******
2026-02-03 02:58:11.118475 | orchestrator | changed: [testbed-manager]
2026-02-03 02:58:11.118494 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:58:11.118506 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:58:11.118517 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:58:11.118528 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:58:11.118538 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:58:11.118549 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:58:11.118589 | orchestrator |
2026-02-03 02:58:11.118602 | orchestrator | PLAY RECAP *********************************************************************
2026-02-03 02:58:11.118614 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-02-03 02:58:11.118627 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-03 02:58:11.118638 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-03 02:58:11.118649 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-03 02:58:11.118659 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-02-03 02:58:11.118670 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-02-03 02:58:11.118681 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-02-03 02:58:11.118692 | orchestrator |
2026-02-03 02:58:11.118703 | orchestrator |
2026-02-03 02:58:11.118714 | orchestrator | TASKS RECAP ********************************************************************
2026-02-03 02:58:11.118725 | orchestrator | Tuesday 03 February 2026 02:58:10 +0000 (0:00:01.143) 0:08:17.179 ******
2026-02-03 02:58:11.118736 | orchestrator | ===============================================================================
2026-02-03 02:58:11.118747 | orchestrator | osism.commons.packages : Install required packages --------------------- 77.84s
2026-02-03 02:58:11.118758 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.91s
2026-02-03 02:58:11.118769 | orchestrator | osism.commons.packages : Download required packages -------------------- 34.21s
2026-02-03 02:58:11.118779 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.49s
2026-02-03 02:58:11.118790 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.38s
2026-02-03 02:58:11.118816 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.27s
2026-02-03 02:58:11.118829 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.21s
2026-02-03 02:58:11.118854 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 10.39s
2026-02-03 02:58:11.118867 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.85s
2026-02-03 02:58:11.118880 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.76s
2026-02-03 02:58:11.118894 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.65s
2026-02-03 02:58:11.118906 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.53s
2026-02-03 02:58:11.118918 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.40s
2026-02-03 02:58:11.118931 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.25s
2026-02-03 02:58:11.118943 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.13s
2026-02-03 02:58:11.118956 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.40s
2026-02-03 02:58:11.118968 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.18s
2026-02-03 02:58:11.118982 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.17s
2026-02-03 02:58:11.118994 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.90s
2026-02-03 02:58:11.119007 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.76s
2026-02-03 02:58:11.436239 | orchestrator | + osism apply fail2ban
2026-02-03 02:58:24.317189 | orchestrator | 2026-02-03 02:58:24 | INFO  | Task b1942a58-d52d-40a6-899f-6c8f73ce3fd3 (fail2ban) was prepared for execution.
2026-02-03 02:58:24.317277 | orchestrator | 2026-02-03 02:58:24 | INFO  | It takes a moment until task b1942a58-d52d-40a6-899f-6c8f73ce3fd3 (fail2ban) has been started and output is visible here.
2026-02-03 02:58:46.993975 | orchestrator |
2026-02-03 02:58:46.994116 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-02-03 02:58:46.994129 | orchestrator |
2026-02-03 02:58:46.994158 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-02-03 02:58:46.994165 | orchestrator | Tuesday 03 February 2026 02:58:28 +0000 (0:00:00.281) 0:00:00.281 ******
2026-02-03 02:58:46.994172 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 02:58:46.994180 | orchestrator |
2026-02-03 02:58:46.994187 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-02-03 02:58:46.994193 | orchestrator | Tuesday 03 February 2026 02:58:30 +0000 (0:00:01.148) 0:00:01.430 ******
2026-02-03 02:58:46.994201 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:58:46.994209 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:58:46.994215 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:58:46.994221 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:58:46.994227 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:58:46.994234 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:58:46.994240 | orchestrator | changed: [testbed-manager]
2026-02-03 02:58:46.994247 | orchestrator |
2026-02-03 02:58:46.994254 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-02-03 02:58:46.994260 | orchestrator | Tuesday 03 February 2026 02:58:41 +0000 (0:00:11.905) 0:00:13.335 ******
2026-02-03 02:58:46.994266 | orchestrator | changed: [testbed-manager]
2026-02-03 02:58:46.994273 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:58:46.994279 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:58:46.994287 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:58:46.994291 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:58:46.994295 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:58:46.994299 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:58:46.994303 | orchestrator |
2026-02-03 02:58:46.994308 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-02-03 02:58:46.994312 | orchestrator | Tuesday 03 February 2026 02:58:43 +0000 (0:00:01.505) 0:00:14.841 ******
2026-02-03 02:58:46.994317 | orchestrator | ok: [testbed-manager]
2026-02-03 02:58:46.994322 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:58:46.994326 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:58:46.994330 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:58:46.994334 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:58:46.994338 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:58:46.994341 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:58:46.994345 | orchestrator |
2026-02-03 02:58:46.994349 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-02-03 02:58:46.994353 | orchestrator | Tuesday 03 February 2026 02:58:44 +0000 (0:00:01.467) 0:00:16.309 ******
2026-02-03 02:58:46.994357 | orchestrator | changed: [testbed-manager]
2026-02-03 02:58:46.994361 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:58:46.994365 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:58:46.994369 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:58:46.994372 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:58:46.994376 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:58:46.994380 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:58:46.994384 | orchestrator |
2026-02-03 02:58:46.994388 | orchestrator | PLAY RECAP *********************************************************************
2026-02-03 02:58:46.994392 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-03 02:58:46.994538 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-03 02:58:46.994547 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-03 02:58:46.994552 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-03 02:58:46.994569 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-03 02:58:46.994574 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-03 02:58:46.994579 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-03 02:58:46.994583 | orchestrator |
2026-02-03 02:58:46.994588 | orchestrator |
2026-02-03 02:58:46.994592 | orchestrator | TASKS RECAP ********************************************************************
2026-02-03 02:58:46.994596 | orchestrator | Tuesday 03 February 2026 02:58:46 +0000 (0:00:01.646) 0:00:17.955 ******
2026-02-03 02:58:46.994601 | orchestrator | ===============================================================================
2026-02-03 02:58:46.994605 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.91s
2026-02-03 02:58:46.994610 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.65s
2026-02-03 02:58:46.994614 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.51s
2026-02-03 02:58:46.994618 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.47s
2026-02-03 02:58:46.994623 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.15s
2026-02-03 02:58:47.311049 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-02-03 02:58:47.311148 | orchestrator | + osism apply network
2026-02-03 02:58:59.411081 | orchestrator | 2026-02-03 02:58:59 | INFO  | Task d35cd1f8-f023-44d9-9a32-23a17ab66136 (network) was prepared for execution.
2026-02-03 02:58:59.411242 | orchestrator | 2026-02-03 02:58:59 | INFO  | It takes a moment until task d35cd1f8-f023-44d9-9a32-23a17ab66136 (network) has been started and output is visible here.
2026-02-03 02:59:28.967840 | orchestrator |
2026-02-03 02:59:28.967968 | orchestrator | PLAY [Apply role network] ******************************************************
2026-02-03 02:59:28.967992 | orchestrator |
2026-02-03 02:59:28.968010 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-02-03 02:59:28.968025 | orchestrator | Tuesday 03 February 2026 02:59:03 +0000 (0:00:00.255) 0:00:00.255 ******
2026-02-03 02:59:28.968042 | orchestrator | ok: [testbed-manager]
2026-02-03 02:59:28.968057 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:59:28.968071 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:59:28.968086 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:59:28.968099 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:59:28.968114 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:59:28.968128 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:59:28.968142 | orchestrator |
2026-02-03 02:59:28.968157 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-02-03 02:59:28.968172 | orchestrator | Tuesday 03 February 2026 02:59:04 +0000 (0:00:00.717) 0:00:00.973 ******
2026-02-03 02:59:28.968189 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 02:59:28.968209 | orchestrator |
2026-02-03 02:59:28.968224 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-02-03 02:59:28.968269 | orchestrator | Tuesday 03 February 2026 02:59:05 +0000 (0:00:01.269) 0:00:02.242 ******
2026-02-03 02:59:28.968286 | orchestrator | ok: [testbed-manager]
2026-02-03 02:59:28.968301 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:59:28.968316 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:59:28.968330 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:59:28.968346 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:59:28.968360 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:59:28.968376 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:59:28.968391 | orchestrator |
2026-02-03 02:59:28.968407 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-02-03 02:59:28.968422 | orchestrator | Tuesday 03 February 2026 02:59:07 +0000 (0:00:02.079) 0:00:04.322 ******
2026-02-03 02:59:28.968438 | orchestrator | ok: [testbed-manager]
2026-02-03 02:59:28.968453 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:59:28.968471 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:59:28.968487 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:59:28.968527 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:59:28.968542 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:59:28.968556 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:59:28.968570 | orchestrator |
2026-02-03 02:59:28.968586 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-02-03 02:59:28.968601 | orchestrator | Tuesday 03 February 2026 02:59:09 +0000 (0:00:01.858) 0:00:06.180 ******
2026-02-03 02:59:28.968616 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-02-03 02:59:28.968633 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-02-03 02:59:28.968649 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-02-03 02:59:28.968664 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-02-03 02:59:28.968679 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-02-03 02:59:28.968693 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-02-03 02:59:28.968708 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-02-03 02:59:28.968721 | orchestrator |
2026-02-03 02:59:28.968756 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-02-03 02:59:28.968779 | orchestrator | Tuesday 03 February 2026 02:59:10 +0000 (0:00:01.053) 0:00:07.234 ******
2026-02-03 02:59:28.968794 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-03 02:59:28.968810 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-03 02:59:28.968825 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-03 02:59:28.968840 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-03 02:59:28.968855 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-03 02:59:28.968870 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-03 02:59:28.968884 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-03 02:59:28.968900 | orchestrator |
2026-02-03 02:59:28.968915 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-02-03 02:59:28.968930 | orchestrator | Tuesday 03 February 2026 02:59:14 +0000 (0:00:03.581) 0:00:10.816 ******
2026-02-03 02:59:28.968945 | orchestrator | changed: [testbed-manager]
2026-02-03 02:59:28.968961 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:59:28.968976 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:59:28.968990 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:59:28.969006 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:59:28.969021 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:59:28.969036 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:59:28.969051 | orchestrator |
2026-02-03 02:59:28.969065 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-02-03 02:59:28.969081 | orchestrator | Tuesday 03 February 2026 02:59:16 +0000 (0:00:01.683) 0:00:12.499 ******
2026-02-03 02:59:28.969095 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-03 02:59:28.969110 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-03 02:59:28.969127 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-03 02:59:28.969142 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-03 02:59:28.969169 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-03 02:59:28.969185 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-03 02:59:28.969199 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-03 02:59:28.969214 | orchestrator |
2026-02-03 02:59:28.969228 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-02-03 02:59:28.969243 | orchestrator | Tuesday 03 February 2026 02:59:17 +0000 (0:00:01.690) 0:00:14.189 ******
2026-02-03 02:59:28.969257 | orchestrator | ok: [testbed-manager]
2026-02-03 02:59:28.969272 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:59:28.969286 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:59:28.969299 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:59:28.969314 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:59:28.969328 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:59:28.969342 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:59:28.969357 | orchestrator |
2026-02-03 02:59:28.969373 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-02-03 02:59:28.969415 | orchestrator | Tuesday 03 February 2026 02:59:18 +0000 (0:00:01.167) 0:00:15.356 ******
2026-02-03 02:59:28.969431 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:59:28.969446 | orchestrator | skipping: [testbed-node-0]
2026-02-03 02:59:28.969462 | orchestrator | skipping: [testbed-node-1]
2026-02-03 02:59:28.969477 | orchestrator | skipping: [testbed-node-2]
2026-02-03 02:59:28.969491 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:59:28.969540 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:59:28.969554 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:59:28.969569 | orchestrator |
2026-02-03 02:59:28.969584 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-02-03 02:59:28.969599 | orchestrator | Tuesday 03 February 2026 02:59:19 +0000 (0:00:00.697) 0:00:16.054 ******
2026-02-03 02:59:28.969613 | orchestrator | ok: [testbed-manager]
2026-02-03 02:59:28.969628 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:59:28.969642 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:59:28.969656 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:59:28.969671 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:59:28.969685 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:59:28.969699 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:59:28.969714 | orchestrator |
2026-02-03 02:59:28.969728 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-02-03 02:59:28.969743 | orchestrator | Tuesday 03 February 2026 02:59:21 +0000 (0:00:02.258) 0:00:18.313 ******
2026-02-03 02:59:28.969757 | orchestrator | skipping: [testbed-node-0]
2026-02-03 02:59:28.969772 | orchestrator | skipping: [testbed-node-1]
2026-02-03 02:59:28.969787 | orchestrator | skipping: [testbed-node-2]
2026-02-03 02:59:28.969802 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:59:28.969816 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:59:28.969830 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:59:28.969846 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-02-03 02:59:28.969862 | orchestrator |
2026-02-03 02:59:28.969877 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-02-03 02:59:28.969892 | orchestrator | Tuesday 03 February 2026 02:59:22 +0000 (0:00:00.935) 0:00:19.249 ******
2026-02-03 02:59:28.969906 | orchestrator | ok: [testbed-manager]
2026-02-03 02:59:28.969920 | orchestrator | changed: [testbed-node-1]
2026-02-03 02:59:28.969935 | orchestrator | changed: [testbed-node-0]
2026-02-03 02:59:28.969949 | orchestrator | changed: [testbed-node-2]
2026-02-03 02:59:28.969964 | orchestrator | changed: [testbed-node-3]
2026-02-03 02:59:28.969979 | orchestrator | changed: [testbed-node-4]
2026-02-03 02:59:28.969994 | orchestrator | changed: [testbed-node-5]
2026-02-03 02:59:28.970008 | orchestrator |
2026-02-03 02:59:28.970096 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-02-03 02:59:28.970113 | orchestrator | Tuesday 03 February 2026 02:59:24 +0000 (0:00:01.651) 0:00:20.900 ******
2026-02-03 02:59:28.970128 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 02:59:28.970159 | orchestrator |
2026-02-03 02:59:28.970175 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-02-03 02:59:28.970190 | orchestrator | Tuesday 03 February 2026 02:59:25 +0000 (0:00:01.347) 0:00:22.247 ******
2026-02-03 02:59:28.970204 | orchestrator | ok: [testbed-manager]
2026-02-03 02:59:28.970219 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:59:28.970234 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:59:28.970248 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:59:28.970269 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:59:28.970284 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:59:28.970299 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:59:28.970314 | orchestrator |
2026-02-03 02:59:28.970328 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-02-03 02:59:28.970341 | orchestrator | Tuesday 03 February 2026 02:59:26 +0000 (0:00:01.054) 0:00:23.302 ******
2026-02-03 02:59:28.970354 | orchestrator | ok: [testbed-manager]
2026-02-03 02:59:28.970367 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:59:28.970380 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:59:28.970393 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:59:28.970406 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:59:28.970419 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:59:28.970432 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:59:28.970445 | orchestrator |
2026-02-03 02:59:28.970457 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-02-03 02:59:28.970471 | orchestrator | Tuesday 03 February 2026 02:59:27 +0000 (0:00:00.881) 0:00:24.183 ******
2026-02-03 02:59:28.970485 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-02-03 02:59:28.970575 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-02-03 02:59:28.970591 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-02-03 02:59:28.970603 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-02-03 02:59:28.970616 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-03 02:59:28.970629 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-02-03 02:59:28.970643 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-03 02:59:28.970655 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-02-03 02:59:28.970668 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-03 02:59:28.970681 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-03 02:59:28.970693 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-03 02:59:28.970707 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-02-03 02:59:28.970720 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-03 02:59:28.970733 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-03 02:59:28.970746 | orchestrator |
2026-02-03 02:59:28.970772 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-02-03 02:59:46.149234 | orchestrator | Tuesday 03 February 2026 02:59:28 +0000 (0:00:01.263) 0:00:25.446 ******
2026-02-03 02:59:46.149363 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:59:46.149379 | orchestrator | skipping: [testbed-node-0]
2026-02-03 02:59:46.149389 | orchestrator | skipping: [testbed-node-1]
2026-02-03 02:59:46.149400 | orchestrator | skipping: [testbed-node-2]
2026-02-03 02:59:46.149410 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:59:46.149420 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:59:46.149430 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:59:46.149439 | orchestrator |
2026-02-03 02:59:46.149568 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-02-03 02:59:46.149592 | orchestrator | Tuesday 03 February 2026 02:59:29 +0000 (0:00:00.636) 0:00:26.083 ******
2026-02-03 02:59:46.149606 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-1, testbed-node-5, testbed-node-0, testbed-node-3, testbed-node-4, testbed-node-2
2026-02-03 02:59:46.149619 | orchestrator |
2026-02-03 02:59:46.149629 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-02-03 02:59:46.149639 | orchestrator | Tuesday 03 February 2026 02:59:34 +0000 (0:00:04.560) 0:00:30.643 ******
2026-02-03 02:59:46.149650 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-02-03 02:59:46.149664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-02-03 02:59:46.149674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-02-03 02:59:46.149684 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-02-03 02:59:46.149694 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-02-03 02:59:46.149717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-02-03 02:59:46.149728 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-02-03 02:59:46.149738 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-02-03 02:59:46.149754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-02-03 02:59:46.149764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-02-03 02:59:46.149775 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-02-03 02:59:46.149804 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-02-03 02:59:46.149824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-02-03 02:59:46.149836 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-02-03 02:59:46.149847 | orchestrator |
2026-02-03 02:59:46.149860 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-02-03 02:59:46.149872 | orchestrator | Tuesday 03 February 2026 02:59:40 +0000 (0:00:06.082) 0:00:36.726 ******
2026-02-03 02:59:46.149884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-02-03 02:59:46.149895 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-02-03 02:59:46.149906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-02-03 02:59:46.149918 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-02-03 02:59:46.149930 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-02-03 02:59:46.149946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-02-03 02:59:46.149958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-02-03 02:59:46.149970 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-02-03 02:59:46.149982 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-02-03 02:59:46.149994 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-02-03 02:59:46.150006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-02-03 02:59:46.150083 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-02-03 02:59:46.150105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-02-03 02:59:52.491465 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-02-03 02:59:52.491576 | orchestrator |
2026-02-03 02:59:52.491593 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-02-03 02:59:52.491604 | orchestrator | Tuesday 03 February 2026 02:59:46 +0000 (0:00:05.898) 0:00:42.625 ******
2026-02-03 02:59:52.491615 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 02:59:52.491625 | orchestrator |
2026-02-03 02:59:52.491634 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
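The netdev and network items above describe a static full-mesh VXLAN overlay: every host's `dests` list contains the underlay IP of every *other* host, in lexicographic order (which is why `192.168.16.5` sorts after `192.168.16.15`). A minimal sketch of that per-host peer-list computation, using only data visible in the log (the helper name is illustrative, not code from the osism.commons.network role):

```python
# Sketch: rebuild the per-host full-mesh VXLAN destination lists seen in the
# "Create systemd networkd netdev files" task items. The role itself derives
# these from inventory; this only mirrors the shape of the logged data.
def mesh_dests(hosts: dict[str, str]) -> dict[str, list[str]]:
    """For each host, return every other host's underlay IP, string-sorted."""
    return {
        name: sorted(ip for peer, ip in hosts.items() if peer != name)
        for name in hosts
    }

# Underlay IPs as shown in the 'local_ip' fields above.
hosts = {
    "testbed-manager": "192.168.16.5",
    "testbed-node-0": "192.168.16.10",
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
    "testbed-node-3": "192.168.16.13",
    "testbed-node-4": "192.168.16.14",
    "testbed-node-5": "192.168.16.15",
}

dests = mesh_dests(hosts)
```

Note the string (not numeric) sort reproduces the exact ordering of the `dests` lists in the task output.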
2026-02-03 02:59:52.491642 | orchestrator | Tuesday 03 February 2026 02:59:47 +0000 (0:00:01.285) 0:00:43.910 ******
2026-02-03 02:59:52.491651 | orchestrator | ok: [testbed-manager]
2026-02-03 02:59:52.491661 | orchestrator | ok: [testbed-node-0]
2026-02-03 02:59:52.491670 | orchestrator | ok: [testbed-node-1]
2026-02-03 02:59:52.491679 | orchestrator | ok: [testbed-node-2]
2026-02-03 02:59:52.491688 | orchestrator | ok: [testbed-node-3]
2026-02-03 02:59:52.491697 | orchestrator | ok: [testbed-node-4]
2026-02-03 02:59:52.491706 | orchestrator | ok: [testbed-node-5]
2026-02-03 02:59:52.491716 | orchestrator |
2026-02-03 02:59:52.491726 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-02-03 02:59:52.491736 | orchestrator | Tuesday 03 February 2026 02:59:48 +0000 (0:00:01.181) 0:00:45.092 ******
2026-02-03 02:59:52.491746 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-03 02:59:52.491758 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-03 02:59:52.491768 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-03 02:59:52.491777 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-03 02:59:52.491783 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:59:52.491790 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-03 02:59:52.491796 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-03 02:59:52.491802 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-03 02:59:52.491808 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-03 02:59:52.491814 | orchestrator | skipping: [testbed-node-0]
2026-02-03 02:59:52.491820 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-03 02:59:52.491838 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-03 02:59:52.491844 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-03 02:59:52.491850 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-03 02:59:52.491872 | orchestrator | skipping: [testbed-node-1]
2026-02-03 02:59:52.491878 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-03 02:59:52.491884 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-03 02:59:52.491890 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-03 02:59:52.491895 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-03 02:59:52.491902 | orchestrator | skipping: [testbed-node-2]
2026-02-03 02:59:52.491908 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-03 02:59:52.491913 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-03 02:59:52.491919 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-03 02:59:52.491925 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-03 02:59:52.491931 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:59:52.491936 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-03 02:59:52.491942 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-03 02:59:52.491948 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-03 02:59:52.491954 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-03 02:59:52.491959 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:59:52.491965 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-03 02:59:52.491971 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-03 02:59:52.491977 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-03 02:59:52.491982 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-03 02:59:52.491988 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:59:52.491994 | orchestrator |
2026-02-03 02:59:52.492000 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-02-03 02:59:52.492019 | orchestrator | Tuesday 03 February 2026 02:59:50 +0000 (0:00:02.071) 0:00:47.163 ******
2026-02-03 02:59:52.492025 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:59:52.492031 | orchestrator | skipping: [testbed-node-0]
2026-02-03 02:59:52.492038 | orchestrator | skipping: [testbed-node-1]
2026-02-03 02:59:52.492044 | orchestrator | skipping: [testbed-node-2]
2026-02-03 02:59:52.492051 | orchestrator | skipping: [testbed-node-3]
2026-02-03 02:59:52.492058 | orchestrator | skipping: [testbed-node-4]
2026-02-03 02:59:52.492065 | orchestrator | skipping: [testbed-node-5]
2026-02-03 02:59:52.492071 | orchestrator |
2026-02-03 02:59:52.492078 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-02-03 02:59:52.492085 | orchestrator | Tuesday 03 February 2026 02:59:51 +0000 (0:00:00.673) 0:00:47.836 ******
2026-02-03 02:59:52.492092 | orchestrator | skipping: [testbed-manager]
2026-02-03 02:59:52.492099 | orchestrator | skipping: [testbed-node-0]
2026-02-03 02:59:52.492105 | orchestrator
| skipping: [testbed-node-1] 2026-02-03 02:59:52.492112 | orchestrator | skipping: [testbed-node-2] 2026-02-03 02:59:52.492119 | orchestrator | skipping: [testbed-node-3] 2026-02-03 02:59:52.492125 | orchestrator | skipping: [testbed-node-4] 2026-02-03 02:59:52.492132 | orchestrator | skipping: [testbed-node-5] 2026-02-03 02:59:52.492139 | orchestrator | 2026-02-03 02:59:52.492145 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 02:59:52.492153 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-03 02:59:52.492162 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-03 02:59:52.492174 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-03 02:59:52.492181 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-03 02:59:52.492187 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-03 02:59:52.492194 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-03 02:59:52.492201 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-03 02:59:52.492207 | orchestrator | 2026-02-03 02:59:52.492214 | orchestrator | 2026-02-03 02:59:52.492221 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 02:59:52.492227 | orchestrator | Tuesday 03 February 2026 02:59:52 +0000 (0:00:00.710) 0:00:48.547 ****** 2026-02-03 02:59:52.492238 | orchestrator | =============================================================================== 2026-02-03 02:59:52.492244 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.08s 
2026-02-03 02:59:52.492251 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.90s 2026-02-03 02:59:52.492258 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.56s 2026-02-03 02:59:52.492265 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.58s 2026-02-03 02:59:52.492272 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.26s 2026-02-03 02:59:52.492279 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.08s 2026-02-03 02:59:52.492285 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.07s 2026-02-03 02:59:52.492291 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.86s 2026-02-03 02:59:52.492297 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.69s 2026-02-03 02:59:52.492302 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.68s 2026-02-03 02:59:52.492308 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.65s 2026-02-03 02:59:52.492314 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.35s 2026-02-03 02:59:52.492320 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.29s 2026-02-03 02:59:52.492326 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.27s 2026-02-03 02:59:52.492331 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.26s 2026-02-03 02:59:52.492337 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.18s 2026-02-03 02:59:52.492343 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.17s 2026-02-03 
02:59:52.492349 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.05s 2026-02-03 02:59:52.492355 | orchestrator | osism.commons.network : Create required directories --------------------- 1.05s 2026-02-03 02:59:52.492360 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.94s 2026-02-03 02:59:52.798110 | orchestrator | + osism apply wireguard 2026-02-03 03:00:04.851749 | orchestrator | 2026-02-03 03:00:04 | INFO  | Task 383d6ec1-0482-4e05-90ac-6d642b09c270 (wireguard) was prepared for execution. 2026-02-03 03:00:04.851844 | orchestrator | 2026-02-03 03:00:04 | INFO  | It takes a moment until task 383d6ec1-0482-4e05-90ac-6d642b09c270 (wireguard) has been started and output is visible here. 2026-02-03 03:00:26.105870 | orchestrator | 2026-02-03 03:00:26.105989 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-02-03 03:00:26.106079 | orchestrator | 2026-02-03 03:00:26.106090 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-02-03 03:00:26.106097 | orchestrator | Tuesday 03 February 2026 03:00:09 +0000 (0:00:00.226) 0:00:00.227 ****** 2026-02-03 03:00:26.106104 | orchestrator | ok: [testbed-manager] 2026-02-03 03:00:26.106112 | orchestrator | 2026-02-03 03:00:26.106119 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-02-03 03:00:26.106126 | orchestrator | Tuesday 03 February 2026 03:00:10 +0000 (0:00:01.656) 0:00:01.883 ****** 2026-02-03 03:00:26.106131 | orchestrator | changed: [testbed-manager] 2026-02-03 03:00:26.106139 | orchestrator | 2026-02-03 03:00:26.106144 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-02-03 03:00:26.106148 | orchestrator | Tuesday 03 February 2026 03:00:18 +0000 (0:00:07.178) 0:00:09.061 ****** 2026-02-03 03:00:26.106152 | orchestrator | changed: 
[testbed-manager] 2026-02-03 03:00:26.106156 | orchestrator | 2026-02-03 03:00:26.106160 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-02-03 03:00:26.106164 | orchestrator | Tuesday 03 February 2026 03:00:18 +0000 (0:00:00.545) 0:00:09.607 ****** 2026-02-03 03:00:26.106167 | orchestrator | changed: [testbed-manager] 2026-02-03 03:00:26.106171 | orchestrator | 2026-02-03 03:00:26.106175 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-02-03 03:00:26.106179 | orchestrator | Tuesday 03 February 2026 03:00:19 +0000 (0:00:00.474) 0:00:10.082 ****** 2026-02-03 03:00:26.106182 | orchestrator | ok: [testbed-manager] 2026-02-03 03:00:26.106186 | orchestrator | 2026-02-03 03:00:26.106190 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-02-03 03:00:26.106194 | orchestrator | Tuesday 03 February 2026 03:00:19 +0000 (0:00:00.688) 0:00:10.770 ****** 2026-02-03 03:00:26.106197 | orchestrator | ok: [testbed-manager] 2026-02-03 03:00:26.106201 | orchestrator | 2026-02-03 03:00:26.106205 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-02-03 03:00:26.106209 | orchestrator | Tuesday 03 February 2026 03:00:20 +0000 (0:00:00.440) 0:00:11.211 ****** 2026-02-03 03:00:26.106212 | orchestrator | ok: [testbed-manager] 2026-02-03 03:00:26.106216 | orchestrator | 2026-02-03 03:00:26.106220 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-02-03 03:00:26.106224 | orchestrator | Tuesday 03 February 2026 03:00:20 +0000 (0:00:00.433) 0:00:11.644 ****** 2026-02-03 03:00:26.106228 | orchestrator | changed: [testbed-manager] 2026-02-03 03:00:26.106231 | orchestrator | 2026-02-03 03:00:26.106235 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-02-03 03:00:26.106239 | orchestrator 
| Tuesday 03 February 2026 03:00:21 +0000 (0:00:01.216) 0:00:12.860 ****** 2026-02-03 03:00:26.106243 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-03 03:00:26.106247 | orchestrator | changed: [testbed-manager] 2026-02-03 03:00:26.106251 | orchestrator | 2026-02-03 03:00:26.106254 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-02-03 03:00:26.106258 | orchestrator | Tuesday 03 February 2026 03:00:22 +0000 (0:00:01.006) 0:00:13.867 ****** 2026-02-03 03:00:26.106262 | orchestrator | changed: [testbed-manager] 2026-02-03 03:00:26.106266 | orchestrator | 2026-02-03 03:00:26.106271 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-02-03 03:00:26.106275 | orchestrator | Tuesday 03 February 2026 03:00:24 +0000 (0:00:01.727) 0:00:15.595 ****** 2026-02-03 03:00:26.106279 | orchestrator | changed: [testbed-manager] 2026-02-03 03:00:26.106282 | orchestrator | 2026-02-03 03:00:26.106286 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 03:00:26.106290 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-03 03:00:26.106296 | orchestrator | 2026-02-03 03:00:26.106300 | orchestrator | 2026-02-03 03:00:26.106304 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 03:00:26.106315 | orchestrator | Tuesday 03 February 2026 03:00:25 +0000 (0:00:00.992) 0:00:16.588 ****** 2026-02-03 03:00:26.106319 | orchestrator | =============================================================================== 2026-02-03 03:00:26.106323 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.18s 2026-02-03 03:00:26.106327 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.73s 2026-02-03 03:00:26.106330 | orchestrator | 
osism.services.wireguard : Install iptables package --------------------- 1.66s 2026-02-03 03:00:26.106334 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.22s 2026-02-03 03:00:26.106338 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.01s 2026-02-03 03:00:26.106342 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.99s 2026-02-03 03:00:26.106346 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.69s 2026-02-03 03:00:26.106349 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.55s 2026-02-03 03:00:26.106353 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.47s 2026-02-03 03:00:26.106357 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.44s 2026-02-03 03:00:26.106361 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.43s 2026-02-03 03:00:26.406106 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-02-03 03:00:26.445345 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-02-03 03:00:26.445427 | orchestrator | Dload Upload Total Spent Left Speed 2026-02-03 03:00:26.526352 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 185 0 --:--:-- --:--:-- --:--:-- 187 2026-02-03 03:00:26.541956 | orchestrator | + osism apply --environment custom workarounds 2026-02-03 03:00:28.496939 | orchestrator | 2026-02-03 03:00:28 | INFO  | Trying to run play workarounds in environment custom 2026-02-03 03:00:38.743844 | orchestrator | 2026-02-03 03:00:38 | INFO  | Task 1e9f614f-f53d-44d7-bf42-2ba1c2dce054 (workarounds) was prepared for execution. 
2026-02-03 03:00:38.743979 | orchestrator | 2026-02-03 03:00:38 | INFO  | It takes a moment until task 1e9f614f-f53d-44d7-bf42-2ba1c2dce054 (workarounds) has been started and output is visible here. 2026-02-03 03:01:05.030094 | orchestrator | 2026-02-03 03:01:05.030204 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-03 03:01:05.030215 | orchestrator | 2026-02-03 03:01:05.030222 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-02-03 03:01:05.030229 | orchestrator | Tuesday 03 February 2026 03:00:42 +0000 (0:00:00.144) 0:00:00.144 ****** 2026-02-03 03:01:05.030235 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-02-03 03:01:05.030242 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-02-03 03:01:05.030248 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-02-03 03:01:05.030253 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-02-03 03:01:05.030259 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-02-03 03:01:05.030265 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-02-03 03:01:05.030270 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-02-03 03:01:05.030276 | orchestrator | 2026-02-03 03:01:05.030281 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-02-03 03:01:05.030286 | orchestrator | 2026-02-03 03:01:05.030292 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-02-03 03:01:05.030297 | orchestrator | Tuesday 03 February 2026 03:00:43 +0000 (0:00:00.826) 0:00:00.970 ****** 2026-02-03 03:01:05.030303 | orchestrator | ok: [testbed-manager] 2026-02-03 03:01:05.030331 | orchestrator | 2026-02-03 03:01:05.030337 | 
orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-02-03 03:01:05.030342 | orchestrator | 2026-02-03 03:01:05.030360 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-02-03 03:01:05.030366 | orchestrator | Tuesday 03 February 2026 03:00:46 +0000 (0:00:02.497) 0:00:03.468 ****** 2026-02-03 03:01:05.030378 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:01:05.030384 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:01:05.030389 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:01:05.030394 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:01:05.030400 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:01:05.030405 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:01:05.030410 | orchestrator | 2026-02-03 03:01:05.030416 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-02-03 03:01:05.030421 | orchestrator | 2026-02-03 03:01:05.030427 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-02-03 03:01:05.030443 | orchestrator | Tuesday 03 February 2026 03:00:48 +0000 (0:00:01.963) 0:00:05.432 ****** 2026-02-03 03:01:05.030450 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-03 03:01:05.030456 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-03 03:01:05.030462 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-03 03:01:05.030467 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-03 03:01:05.030473 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-03 03:01:05.030478 | orchestrator 
| changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-03 03:01:05.030484 | orchestrator | 2026-02-03 03:01:05.030489 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2026-02-03 03:01:05.030494 | orchestrator | Tuesday 03 February 2026 03:00:49 +0000 (0:00:01.563) 0:00:06.995 ****** 2026-02-03 03:01:05.030500 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:01:05.030505 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:01:05.030511 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:01:05.030516 | orchestrator | changed: [testbed-node-3] 2026-02-03 03:01:05.030522 | orchestrator | changed: [testbed-node-4] 2026-02-03 03:01:05.030527 | orchestrator | changed: [testbed-node-5] 2026-02-03 03:01:05.030532 | orchestrator | 2026-02-03 03:01:05.030538 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-02-03 03:01:05.030543 | orchestrator | Tuesday 03 February 2026 03:00:53 +0000 (0:00:03.941) 0:00:10.936 ****** 2026-02-03 03:01:05.030549 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:01:05.030554 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:01:05.030560 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:01:05.030565 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:01:05.030571 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:01:05.030576 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:01:05.030582 | orchestrator | 2026-02-03 03:01:05.030587 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-02-03 03:01:05.030593 | orchestrator | 2026-02-03 03:01:05.030598 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-02-03 03:01:05.030604 | orchestrator | Tuesday 03 February 2026 03:00:54 +0000 (0:00:00.709) 0:00:11.645 ****** 2026-02-03 
03:01:05.030630 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:01:05.030638 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:01:05.030643 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:01:05.030649 | orchestrator | changed: [testbed-node-3] 2026-02-03 03:01:05.030654 | orchestrator | changed: [testbed-node-4] 2026-02-03 03:01:05.030659 | orchestrator | changed: [testbed-node-5] 2026-02-03 03:01:05.030671 | orchestrator | changed: [testbed-manager] 2026-02-03 03:01:05.030676 | orchestrator | 2026-02-03 03:01:05.030682 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-02-03 03:01:05.030687 | orchestrator | Tuesday 03 February 2026 03:00:56 +0000 (0:00:01.691) 0:00:13.337 ****** 2026-02-03 03:01:05.030693 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:01:05.030698 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:01:05.030703 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:01:05.030709 | orchestrator | changed: [testbed-node-3] 2026-02-03 03:01:05.030714 | orchestrator | changed: [testbed-node-4] 2026-02-03 03:01:05.030720 | orchestrator | changed: [testbed-node-5] 2026-02-03 03:01:05.030737 | orchestrator | changed: [testbed-manager] 2026-02-03 03:01:05.030743 | orchestrator | 2026-02-03 03:01:05.030749 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-02-03 03:01:05.030754 | orchestrator | Tuesday 03 February 2026 03:00:57 +0000 (0:00:01.660) 0:00:14.997 ****** 2026-02-03 03:01:05.030760 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:01:05.030765 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:01:05.030771 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:01:05.030776 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:01:05.030782 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:01:05.030787 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:01:05.030793 | orchestrator | ok: [testbed-manager] 
2026-02-03 03:01:05.030798 | orchestrator | 2026-02-03 03:01:05.030803 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-02-03 03:01:05.030809 | orchestrator | Tuesday 03 February 2026 03:00:59 +0000 (0:00:01.622) 0:00:16.620 ****** 2026-02-03 03:01:05.030814 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:01:05.030820 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:01:05.030828 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:01:05.030837 | orchestrator | changed: [testbed-node-3] 2026-02-03 03:01:05.030851 | orchestrator | changed: [testbed-node-4] 2026-02-03 03:01:05.030861 | orchestrator | changed: [testbed-node-5] 2026-02-03 03:01:05.030869 | orchestrator | changed: [testbed-manager] 2026-02-03 03:01:05.030877 | orchestrator | 2026-02-03 03:01:05.030886 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-02-03 03:01:05.030895 | orchestrator | Tuesday 03 February 2026 03:01:01 +0000 (0:00:01.923) 0:00:18.544 ****** 2026-02-03 03:01:05.030904 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:01:05.030914 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:01:05.030924 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:01:05.030930 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:01:05.030936 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:01:05.030941 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:01:05.030947 | orchestrator | skipping: [testbed-manager] 2026-02-03 03:01:05.030952 | orchestrator | 2026-02-03 03:01:05.030957 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-02-03 03:01:05.030963 | orchestrator | 2026-02-03 03:01:05.030968 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-02-03 03:01:05.030974 | orchestrator | Tuesday 03 February 2026 03:01:02 +0000 (0:00:00.692) 
0:00:19.236 ****** 2026-02-03 03:01:05.030979 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:01:05.030984 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:01:05.030990 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:01:05.030995 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:01:05.031001 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:01:05.031011 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:01:05.031016 | orchestrator | ok: [testbed-manager] 2026-02-03 03:01:05.031022 | orchestrator | 2026-02-03 03:01:05.031027 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 03:01:05.031034 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-03 03:01:05.031041 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 03:01:05.031053 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 03:01:05.031058 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 03:01:05.031064 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 03:01:05.031070 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 03:01:05.031075 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 03:01:05.031081 | orchestrator | 2026-02-03 03:01:05.031086 | orchestrator | 2026-02-03 03:01:05.031092 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 03:01:05.031097 | orchestrator | Tuesday 03 February 2026 03:01:04 +0000 (0:00:02.981) 0:00:22.217 ****** 2026-02-03 03:01:05.031102 | orchestrator | 
=============================================================================== 2026-02-03 03:01:05.031108 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.94s 2026-02-03 03:01:05.031113 | orchestrator | Install python3-docker -------------------------------------------------- 2.98s 2026-02-03 03:01:05.031119 | orchestrator | Apply netplan configuration --------------------------------------------- 2.50s 2026-02-03 03:01:05.031124 | orchestrator | Apply netplan configuration --------------------------------------------- 1.96s 2026-02-03 03:01:05.031130 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.92s 2026-02-03 03:01:05.031135 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.69s 2026-02-03 03:01:05.031141 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.66s 2026-02-03 03:01:05.031146 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.62s 2026-02-03 03:01:05.031151 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.56s 2026-02-03 03:01:05.031157 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.83s 2026-02-03 03:01:05.031162 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.71s 2026-02-03 03:01:05.031172 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.69s 2026-02-03 03:01:05.737961 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-02-03 03:01:17.911325 | orchestrator | 2026-02-03 03:01:17 | INFO  | Task f6eb4e65-064c-465d-bd9d-783f0a8e4a58 (reboot) was prepared for execution. 
2026-02-03 03:01:17.911408 | orchestrator | 2026-02-03 03:01:17 | INFO  | It takes a moment until task f6eb4e65-064c-465d-bd9d-783f0a8e4a58 (reboot) has been started and output is visible here. 2026-02-03 03:01:28.222273 | orchestrator | 2026-02-03 03:01:28.222391 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-03 03:01:28.222408 | orchestrator | 2026-02-03 03:01:28.222420 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-03 03:01:28.222433 | orchestrator | Tuesday 03 February 2026 03:01:22 +0000 (0:00:00.217) 0:00:00.217 ****** 2026-02-03 03:01:28.222445 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:01:28.222457 | orchestrator | 2026-02-03 03:01:28.222468 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-03 03:01:28.222480 | orchestrator | Tuesday 03 February 2026 03:01:22 +0000 (0:00:00.109) 0:00:00.327 ****** 2026-02-03 03:01:28.222491 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:01:28.222503 | orchestrator | 2026-02-03 03:01:28.222514 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-03 03:01:28.222551 | orchestrator | Tuesday 03 February 2026 03:01:23 +0000 (0:00:00.883) 0:00:01.210 ****** 2026-02-03 03:01:28.222563 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:01:28.222574 | orchestrator | 2026-02-03 03:01:28.222586 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-03 03:01:28.222597 | orchestrator | 2026-02-03 03:01:28.222608 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-03 03:01:28.222619 | orchestrator | Tuesday 03 February 2026 03:01:23 +0000 (0:00:00.113) 0:00:01.324 ****** 2026-02-03 03:01:28.222630 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:01:28.222705 | 
orchestrator |
2026-02-03 03:01:28.222716 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-03 03:01:28.222727 | orchestrator | Tuesday 03 February 2026 03:01:23 +0000 (0:00:00.108) 0:00:01.432 ******
2026-02-03 03:01:28.222738 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:01:28.222749 | orchestrator |
2026-02-03 03:01:28.222760 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-03 03:01:28.222786 | orchestrator | Tuesday 03 February 2026 03:01:24 +0000 (0:00:00.673) 0:00:02.106 ******
2026-02-03 03:01:28.222797 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:01:28.222811 | orchestrator |
2026-02-03 03:01:28.222823 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-03 03:01:28.222837 | orchestrator |
2026-02-03 03:01:28.222850 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-03 03:01:28.222863 | orchestrator | Tuesday 03 February 2026 03:01:24 +0000 (0:00:00.115) 0:00:02.222 ******
2026-02-03 03:01:28.222876 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:01:28.222888 | orchestrator |
2026-02-03 03:01:28.222902 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-03 03:01:28.222915 | orchestrator | Tuesday 03 February 2026 03:01:24 +0000 (0:00:00.220) 0:00:02.442 ******
2026-02-03 03:01:28.222929 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:01:28.222943 | orchestrator |
2026-02-03 03:01:28.222955 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-03 03:01:28.222968 | orchestrator | Tuesday 03 February 2026 03:01:25 +0000 (0:00:00.665) 0:00:03.107 ******
2026-02-03 03:01:28.222981 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:01:28.222993 | orchestrator |
2026-02-03 03:01:28.223006 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-03 03:01:28.223019 | orchestrator |
2026-02-03 03:01:28.223032 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-03 03:01:28.223051 | orchestrator | Tuesday 03 February 2026 03:01:25 +0000 (0:00:00.130) 0:00:03.238 ******
2026-02-03 03:01:28.223075 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:01:28.223101 | orchestrator |
2026-02-03 03:01:28.223118 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-03 03:01:28.223136 | orchestrator | Tuesday 03 February 2026 03:01:25 +0000 (0:00:00.106) 0:00:03.345 ******
2026-02-03 03:01:28.223152 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:01:28.223170 | orchestrator |
2026-02-03 03:01:28.223185 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-03 03:01:28.223204 | orchestrator | Tuesday 03 February 2026 03:01:26 +0000 (0:00:00.689) 0:00:04.034 ******
2026-02-03 03:01:28.223222 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:01:28.223240 | orchestrator |
2026-02-03 03:01:28.223257 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-03 03:01:28.223276 | orchestrator |
2026-02-03 03:01:28.223296 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-03 03:01:28.223307 | orchestrator | Tuesday 03 February 2026 03:01:26 +0000 (0:00:00.120) 0:00:04.154 ******
2026-02-03 03:01:28.223318 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:01:28.223329 | orchestrator |
2026-02-03 03:01:28.223340 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-03 03:01:28.223364 | orchestrator | Tuesday 03 February 2026 03:01:26 +0000 (0:00:00.090) 0:00:04.245 ******
2026-02-03 03:01:28.223375 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:01:28.223385 | orchestrator |
2026-02-03 03:01:28.223396 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-03 03:01:28.223407 | orchestrator | Tuesday 03 February 2026 03:01:26 +0000 (0:00:00.672) 0:00:04.918 ******
2026-02-03 03:01:28.223418 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:01:28.223430 | orchestrator |
2026-02-03 03:01:28.223441 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-03 03:01:28.223452 | orchestrator |
2026-02-03 03:01:28.223463 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-03 03:01:28.223473 | orchestrator | Tuesday 03 February 2026 03:01:27 +0000 (0:00:00.121) 0:00:05.039 ******
2026-02-03 03:01:28.223484 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:01:28.223495 | orchestrator |
2026-02-03 03:01:28.223506 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-03 03:01:28.223517 | orchestrator | Tuesday 03 February 2026 03:01:27 +0000 (0:00:00.114) 0:00:05.154 ******
2026-02-03 03:01:28.223528 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:01:28.223539 | orchestrator |
2026-02-03 03:01:28.223549 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-03 03:01:28.223561 | orchestrator | Tuesday 03 February 2026 03:01:27 +0000 (0:00:00.705) 0:00:05.859 ******
2026-02-03 03:01:28.223591 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:01:28.223603 | orchestrator |
2026-02-03 03:01:28.223614 | orchestrator | PLAY RECAP *********************************************************************
2026-02-03 03:01:28.223626 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-03 03:01:28.223682 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-03 03:01:28.223694 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-03 03:01:28.223705 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-03 03:01:28.223716 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-03 03:01:28.223727 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-03 03:01:28.223738 | orchestrator |
2026-02-03 03:01:28.223749 | orchestrator |
2026-02-03 03:01:28.223760 | orchestrator | TASKS RECAP ********************************************************************
2026-02-03 03:01:28.223771 | orchestrator | Tuesday 03 February 2026 03:01:27 +0000 (0:00:00.038) 0:00:05.897 ******
2026-02-03 03:01:28.223790 | orchestrator | ===============================================================================
2026-02-03 03:01:28.223801 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.29s
2026-02-03 03:01:28.223812 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.75s
2026-02-03 03:01:28.223823 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.64s
2026-02-03 03:01:28.542249 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-02-03 03:01:40.804428 | orchestrator | 2026-02-03 03:01:40 | INFO  | Task 205c8432-f907-42b2-bdff-5f92e292a14e (wait-for-connection) was prepared for execution.
2026-02-03 03:01:40.804508 | orchestrator | 2026-02-03 03:01:40 | INFO  | It takes a moment until task 205c8432-f907-42b2-bdff-5f92e292a14e (wait-for-connection) has been started and output is visible here.
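The reboot plays above and the `osism apply wait-for-connection` run that follows form one deliberate pattern: the reboot play fires without waiting (each node drops its SSH connection), and a separate play then blocks until every node answers again. A minimal sketch of that sequence as a shell helper; the `osism apply reboot` spelling is an assumption, since only the `wait-for-connection` call appears verbatim in the trace:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Reboot a group of nodes and block until they are reachable again.
reboot_and_wait() {
    local limit=$1
    # Fire-and-forget reboot; this play does not wait for nodes to return.
    osism apply reboot -l "$limit" -e ireallymeanit=yes
    # Block until every node in the limit accepts connections again.
    osism apply wait-for-connection -l "$limit" -e ireallymeanit=yes
}
```

Splitting the two steps keeps the reboot play itself fast and lets the same wait-for-connection play be reused after any disruptive operation.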
2026-02-03 03:01:57.890142 | orchestrator |
2026-02-03 03:01:57.890239 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-02-03 03:01:57.890251 | orchestrator |
2026-02-03 03:01:57.890259 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-02-03 03:01:57.890267 | orchestrator | Tuesday 03 February 2026 03:01:45 +0000 (0:00:00.256) 0:00:00.256 ******
2026-02-03 03:01:57.890274 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:01:57.890282 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:01:57.890289 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:01:57.890296 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:01:57.890302 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:01:57.890309 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:01:57.890315 | orchestrator |
2026-02-03 03:01:57.890322 | orchestrator | PLAY RECAP *********************************************************************
2026-02-03 03:01:57.890330 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-03 03:01:57.890338 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-03 03:01:57.890345 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-03 03:01:57.890352 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-03 03:01:57.890358 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-03 03:01:57.890366 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-03 03:01:57.890378 | orchestrator |
2026-02-03 03:01:57.890390 | orchestrator |
2026-02-03 03:01:57.890401 | orchestrator | TASKS RECAP ********************************************************************
2026-02-03 03:01:57.890412 | orchestrator | Tuesday 03 February 2026 03:01:57 +0000 (0:00:11.655) 0:00:11.913 ******
2026-02-03 03:01:57.890424 | orchestrator | ===============================================================================
2026-02-03 03:01:57.890435 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.66s
2026-02-03 03:01:58.257635 | orchestrator | + osism apply hddtemp
2026-02-03 03:02:10.506330 | orchestrator | 2026-02-03 03:02:10 | INFO  | Task 473b4774-c2f6-4cc0-a9f7-23487f296e51 (hddtemp) was prepared for execution.
2026-02-03 03:02:10.506398 | orchestrator | 2026-02-03 03:02:10 | INFO  | It takes a moment until task 473b4774-c2f6-4cc0-a9f7-23487f296e51 (hddtemp) has been started and output is visible here.
2026-02-03 03:02:40.514071 | orchestrator |
2026-02-03 03:02:40.514151 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-02-03 03:02:40.514158 | orchestrator |
2026-02-03 03:02:40.514163 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-02-03 03:02:40.514167 | orchestrator | Tuesday 03 February 2026 03:02:14 +0000 (0:00:00.266) 0:00:00.266 ******
2026-02-03 03:02:40.514172 | orchestrator | ok: [testbed-manager]
2026-02-03 03:02:40.514177 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:02:40.514181 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:02:40.514186 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:02:40.514199 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:02:40.514204 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:02:40.514214 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:02:40.514219 | orchestrator |
2026-02-03 03:02:40.514223 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-02-03 03:02:40.514227 | orchestrator | Tuesday 03 February 2026 03:02:15 +0000 (0:00:00.750) 0:00:01.017 ******
2026-02-03 03:02:40.514233 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 03:02:40.514259 | orchestrator |
2026-02-03 03:02:40.514266 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-02-03 03:02:40.514273 | orchestrator | Tuesday 03 February 2026 03:02:16 +0000 (0:00:01.249) 0:00:02.266 ******
2026-02-03 03:02:40.514282 | orchestrator | ok: [testbed-manager]
2026-02-03 03:02:40.514290 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:02:40.514296 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:02:40.514302 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:02:40.514309 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:02:40.514315 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:02:40.514322 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:02:40.514327 | orchestrator |
2026-02-03 03:02:40.514333 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-02-03 03:02:40.514352 | orchestrator | Tuesday 03 February 2026 03:02:18 +0000 (0:00:02.119) 0:00:04.386 ******
2026-02-03 03:02:40.514359 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:02:40.514366 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:02:40.514373 | orchestrator | changed: [testbed-manager]
2026-02-03 03:02:40.514379 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:02:40.514384 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:02:40.514390 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:02:40.514396 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:02:40.514402 | orchestrator |
2026-02-03 03:02:40.514408 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-02-03 03:02:40.514415 | orchestrator | Tuesday 03 February 2026 03:02:20 +0000 (0:00:01.276) 0:00:05.662 ******
2026-02-03 03:02:40.514421 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:02:40.514427 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:02:40.514433 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:02:40.514437 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:02:40.514441 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:02:40.514445 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:02:40.514449 | orchestrator | ok: [testbed-manager]
2026-02-03 03:02:40.514453 | orchestrator |
2026-02-03 03:02:40.514457 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-02-03 03:02:40.514461 | orchestrator | Tuesday 03 February 2026 03:02:21 +0000 (0:00:01.201) 0:00:06.863 ******
2026-02-03 03:02:40.514464 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:02:40.514468 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:02:40.514472 | orchestrator | changed: [testbed-manager]
2026-02-03 03:02:40.514476 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:02:40.514480 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:02:40.514484 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:02:40.514488 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:02:40.514491 | orchestrator |
2026-02-03 03:02:40.514495 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-02-03 03:02:40.514499 | orchestrator | Tuesday 03 February 2026 03:02:22 +0000 (0:00:01.029) 0:00:07.892 ******
2026-02-03 03:02:40.514503 | orchestrator | changed: [testbed-manager]
2026-02-03 03:02:40.514507 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:02:40.514511 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:02:40.514515 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:02:40.514519 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:02:40.514522 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:02:40.514526 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:02:40.514530 | orchestrator |
2026-02-03 03:02:40.514534 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-02-03 03:02:40.514538 | orchestrator | Tuesday 03 February 2026 03:02:36 +0000 (0:00:14.259) 0:00:22.152 ******
2026-02-03 03:02:40.514542 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 03:02:40.514552 | orchestrator |
2026-02-03 03:02:40.514556 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-02-03 03:02:40.514560 | orchestrator | Tuesday 03 February 2026 03:02:38 +0000 (0:00:01.365) 0:00:23.518 ******
2026-02-03 03:02:40.514564 | orchestrator | changed: [testbed-manager]
2026-02-03 03:02:40.514568 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:02:40.514572 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:02:40.514576 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:02:40.514580 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:02:40.514584 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:02:40.514588 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:02:40.514592 | orchestrator |
2026-02-03 03:02:40.514596 | orchestrator | PLAY RECAP *********************************************************************
2026-02-03 03:02:40.514600 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-03 03:02:40.514616 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-03 03:02:40.514622 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-03 03:02:40.514627 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-03 03:02:40.514631 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-03 03:02:40.514636 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-03 03:02:40.514641 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-03 03:02:40.514645 | orchestrator |
2026-02-03 03:02:40.514650 | orchestrator |
2026-02-03 03:02:40.514654 | orchestrator | TASKS RECAP ********************************************************************
2026-02-03 03:02:40.514659 | orchestrator | Tuesday 03 February 2026 03:02:40 +0000 (0:00:01.966) 0:00:25.485 ******
2026-02-03 03:02:40.514664 | orchestrator | ===============================================================================
2026-02-03 03:02:40.514669 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 14.26s
2026-02-03 03:02:40.514674 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.12s
2026-02-03 03:02:40.514678 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.97s
2026-02-03 03:02:40.514686 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.37s
2026-02-03 03:02:40.514690 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.28s
2026-02-03 03:02:40.514695 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.25s
2026-02-03 03:02:40.514699 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.20s
2026-02-03 03:02:40.514704 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 1.03s
2026-02-03 03:02:40.514709 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.75s
2026-02-03 03:02:40.871821 | orchestrator | ++ semver 9.5.0 7.1.1
2026-02-03 03:02:40.932442 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-03 03:02:40.932541 | orchestrator | + sudo systemctl restart manager.service
2026-02-03 03:02:58.711930 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-03 03:02:58.712020 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-02-03 03:02:58.712029 | orchestrator | + local max_attempts=60
2026-02-03 03:02:58.712036 | orchestrator | + local name=ceph-ansible
2026-02-03 03:02:58.712043 | orchestrator | + local attempt_num=1
2026-02-03 03:02:58.712049 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-03 03:02:58.750165 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-03 03:02:58.750253 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-03 03:02:58.750266 | orchestrator | + sleep 5
2026-02-03 03:03:03.758392 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-03 03:03:03.826208 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-03 03:03:03.826310 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-03 03:03:03.826330 | orchestrator | + sleep 5
2026-02-03 03:03:08.829524 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-03 03:03:08.867815 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-03 03:03:08.867890 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-03 03:03:08.867899 | orchestrator | + sleep 5
2026-02-03 03:03:13.873445 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-03 03:03:13.917024 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-03 03:03:13.917121 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-03 03:03:13.917136 | orchestrator | + sleep 5
2026-02-03 03:03:18.922351 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-03 03:03:18.950288 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-03 03:03:18.950378 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-03 03:03:18.950391 | orchestrator | + sleep 5
2026-02-03 03:03:23.956125 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-03 03:03:23.997476 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-03 03:03:23.997632 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-03 03:03:23.997651 | orchestrator | + sleep 5
2026-02-03 03:03:29.003000 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-03 03:03:29.045528 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-03 03:03:29.045617 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-03 03:03:29.045629 | orchestrator | + sleep 5
2026-02-03 03:03:34.051456 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-03 03:03:34.094820 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-03 03:03:34.094914 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-03 03:03:34.094923 | orchestrator | + sleep 5
2026-02-03 03:03:39.097489 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-03 03:03:39.136725 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-03 03:03:39.136830 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-03 03:03:39.136842 | orchestrator | + sleep 5
2026-02-03 03:03:44.140458 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-03 03:03:44.174347 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-03 03:03:44.174422 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-03 03:03:44.174430 | orchestrator | + sleep 5
2026-02-03 03:03:49.181928 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-03 03:03:49.223734 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-03 03:03:49.223834 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-03 03:03:49.223845 | orchestrator | + sleep 5
2026-02-03 03:03:54.230287 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-03 03:03:54.271775 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-03 03:03:54.271926 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-03 03:03:54.271947 | orchestrator | + sleep 5
2026-02-03 03:03:59.276629 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-03 03:03:59.307907 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-03 03:03:59.308009 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-03 03:03:59.308026 | orchestrator | + sleep 5
2026-02-03 03:04:04.312002 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-03 03:04:04.347637 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-03 03:04:04.347731 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-02-03 03:04:04.347743 | orchestrator | + local max_attempts=60
2026-02-03 03:04:04.347752 | orchestrator | + local name=kolla-ansible
2026-02-03 03:04:04.347761 | orchestrator | + local attempt_num=1
2026-02-03 03:04:04.348596 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-02-03 03:04:04.390584 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-03 03:04:04.390679 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-02-03 03:04:04.390726 | orchestrator | + local max_attempts=60
2026-02-03 03:04:04.390740 | orchestrator | + local name=osism-ansible
2026-02-03 03:04:04.390751 | orchestrator | + local attempt_num=1
2026-02-03 03:04:04.391862 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-02-03 03:04:04.419926 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-03 03:04:04.420022 | orchestrator | + [[ true == \t\r\u\e ]]
2026-02-03 03:04:04.420039 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-02-03 03:04:04.606378 | orchestrator | ARA in ceph-ansible already disabled.
2026-02-03 03:04:04.780319 | orchestrator | ARA in kolla-ansible already disabled.
2026-02-03 03:04:04.947748 | orchestrator | ARA in osism-ansible already disabled.
2026-02-03 03:04:05.124621 | orchestrator | ARA in osism-kubernetes already disabled.
2026-02-03 03:04:05.125169 | orchestrator | + osism apply gather-facts
2026-02-03 03:04:17.399781 | orchestrator | 2026-02-03 03:04:17 | INFO  | Task e45104cb-81f6-4bad-b2bf-459a17156b5c (gather-facts) was prepared for execution.
2026-02-03 03:04:17.399907 | orchestrator | 2026-02-03 03:04:17 | INFO  | It takes a moment until task e45104cb-81f6-4bad-b2bf-459a17156b5c (gather-facts) has been started and output is visible here.
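The long `docker inspect` sequence above is the body of `wait_for_container_healthy` unrolled by `set -x`. Reconstructed from that trace as a standalone function, it looks roughly like this; the error message and the bare `docker` name (instead of `/usr/bin/docker`) are small liberties, but the polling logic matches the trace:

```shell
#!/usr/bin/env bash

# Poll a container's Docker health status until it reports "healthy",
# probing once every 5 seconds, for at most max_attempts probes.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        # Give up once the attempt budget is spent (~5 minutes at 60 attempts).
        if (( attempt_num++ == max_attempts )); then
            echo "container ${name} did not become healthy in time" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the run above, ceph-ansible cycled through `unhealthy` and then `starting` for roughly a minute after the manager service restart before reporting `healthy`; kolla-ansible and osism-ansible were already healthy on the first probe.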
2026-02-03 03:04:31.543051 | orchestrator |
2026-02-03 03:04:31.543159 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-03 03:04:31.543175 | orchestrator |
2026-02-03 03:04:31.543187 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-03 03:04:31.543194 | orchestrator | Tuesday 03 February 2026 03:04:21 +0000 (0:00:00.228) 0:00:00.228 ******
2026-02-03 03:04:31.543201 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:04:31.543208 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:04:31.543214 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:04:31.543220 | orchestrator | ok: [testbed-manager]
2026-02-03 03:04:31.543226 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:04:31.543232 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:04:31.543238 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:04:31.543244 | orchestrator |
2026-02-03 03:04:31.543250 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-02-03 03:04:31.543256 | orchestrator |
2026-02-03 03:04:31.543262 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-02-03 03:04:31.543267 | orchestrator | Tuesday 03 February 2026 03:04:30 +0000 (0:00:08.749) 0:00:08.978 ******
2026-02-03 03:04:31.543273 | orchestrator | skipping: [testbed-manager]
2026-02-03 03:04:31.543280 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:04:31.543286 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:04:31.543292 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:04:31.543298 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:04:31.543303 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:04:31.543309 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:04:31.543315 | orchestrator |
2026-02-03 03:04:31.543321 | orchestrator | PLAY RECAP *********************************************************************
2026-02-03 03:04:31.543327 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-03 03:04:31.543335 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-03 03:04:31.543340 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-03 03:04:31.543346 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-03 03:04:31.543352 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-03 03:04:31.543358 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-03 03:04:31.543386 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-03 03:04:31.543392 | orchestrator |
2026-02-03 03:04:31.543398 | orchestrator |
2026-02-03 03:04:31.543404 | orchestrator | TASKS RECAP ********************************************************************
2026-02-03 03:04:31.543410 | orchestrator | Tuesday 03 February 2026 03:04:31 +0000 (0:00:00.524) 0:00:09.502 ******
2026-02-03 03:04:31.543416 | orchestrator | ===============================================================================
2026-02-03 03:04:31.543421 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.75s
2026-02-03 03:04:31.543427 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s
2026-02-03 03:04:31.861425 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2026-02-03 03:04:31.876242 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2026-02-03 03:04:31.889288 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2026-02-03 03:04:31.902985 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2026-02-03 03:04:31.924337 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2026-02-03 03:04:31.945445 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal
2026-02-03 03:04:31.968014 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2026-02-03 03:04:31.986903 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2026-02-03 03:04:32.013711 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2026-02-03 03:04:32.028249 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager
2026-02-03 03:04:32.040239 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2026-02-03 03:04:32.054171 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2026-02-03 03:04:32.068450 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2026-02-03 03:04:32.086438 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2026-02-03 03:04:32.109900 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal
2026-02-03 03:04:32.121797 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-02-03 03:04:32.135310 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-02-03 03:04:32.144633 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-02-03 03:04:32.159343 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-02-03 03:04:32.175295 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2026-02-03 03:04:32.188140 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-02-03 03:04:32.202187 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-02-03 03:04:32.213974 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-02-03 03:04:32.225811 | orchestrator | + [[ false == \t\r\u\e ]]
2026-02-03 03:04:32.360664 | orchestrator | ok: Runtime: 0:24:56.724112
2026-02-03 03:04:32.447473 |
2026-02-03 03:04:32.447646 | TASK [Deploy services]
2026-02-03 03:04:33.164373 | orchestrator |
2026-02-03 03:04:33.164602 | orchestrator | # DEPLOY SERVICES
2026-02-03 03:04:33.164643 | orchestrator |
2026-02-03 03:04:33.164668 | orchestrator | + set -e
2026-02-03 03:04:33.164691 | orchestrator | + echo
2026-02-03 03:04:33.164713 | orchestrator | + echo '# DEPLOY SERVICES'
2026-02-03 03:04:33.164737 | orchestrator | + echo
2026-02-03 03:04:33.164798 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-03 03:04:33.164831 | orchestrator | ++ export INTERACTIVE=false
2026-02-03 03:04:33.164882 | orchestrator | ++ INTERACTIVE=false
2026-02-03 03:04:33.164904 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-03 03:04:33.164938 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-03 03:04:33.164959 | orchestrator | + source /opt/manager-vars.sh
2026-02-03 03:04:33.165013 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-03 03:04:33.165033 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-03 03:04:33.165061 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-03 03:04:33.165079 | orchestrator | ++ CEPH_VERSION=reef
2026-02-03 03:04:33.165103 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-03 03:04:33.165124 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-03 03:04:33.165147 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-03 03:04:33.165166 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-03 03:04:33.165184 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-03 03:04:33.165204 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-03 03:04:33.165224 | orchestrator | ++ export ARA=false
2026-02-03 03:04:33.165243 | orchestrator | ++ ARA=false
2026-02-03 03:04:33.165261 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-03 03:04:33.165280 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-03 03:04:33.165298 | orchestrator | ++ export TEMPEST=false
2026-02-03 03:04:33.165316 | orchestrator | ++ TEMPEST=false
2026-02-03 03:04:33.165336 | orchestrator | ++ export IS_ZUUL=true
2026-02-03 03:04:33.165353 | orchestrator | ++ IS_ZUUL=true
2026-02-03 03:04:33.165372 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115
2026-02-03 03:04:33.165390 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115
2026-02-03 03:04:33.165409 | orchestrator | ++ export EXTERNAL_API=false
2026-02-03 03:04:33.165427 | orchestrator | ++ EXTERNAL_API=false
2026-02-03 03:04:33.165447 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-03 03:04:33.165466 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-03 03:04:33.165484 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-03 03:04:33.165503 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-03 03:04:33.165522 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-03 03:04:33.165551 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-03 03:04:33.165570 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh
2026-02-03 03:04:33.174654 | orchestrator | + set -e
2026-02-03 03:04:33.175875 | orchestrator |
2026-02-03 03:04:33.175935 | orchestrator | # PULL IMAGES
2026-02-03 03:04:33.175946 | orchestrator |
2026-02-03 03:04:33.175952 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-03 03:04:33.175974 | orchestrator | ++ export INTERACTIVE=false
2026-02-03 03:04:33.175983 | orchestrator | ++ INTERACTIVE=false
2026-02-03 03:04:33.175988 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-03 03:04:33.175993 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-03 03:04:33.175998 | orchestrator | + source /opt/manager-vars.sh
2026-02-03 03:04:33.176002 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-03 03:04:33.176007 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-03 03:04:33.176012 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-03 03:04:33.176017 | orchestrator | ++ CEPH_VERSION=reef
2026-02-03 03:04:33.176021 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-03 03:04:33.176026 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-03 03:04:33.176031 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-03 03:04:33.176036 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-03 03:04:33.176041 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-03 03:04:33.176045 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-03 03:04:33.176050 | orchestrator | ++ export ARA=false
2026-02-03 03:04:33.176054 | orchestrator | ++ ARA=false
2026-02-03 03:04:33.176061 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-03 03:04:33.176065 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-03 03:04:33.176070 | orchestrator | ++ export TEMPEST=false
2026-02-03 03:04:33.176074 | orchestrator | ++ TEMPEST=false 2026-02-03 03:04:33.176078 | orchestrator | ++ export IS_ZUUL=true 2026-02-03 03:04:33.176083 | orchestrator | ++ IS_ZUUL=true 2026-02-03 03:04:33.176087 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-02-03 03:04:33.176092 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-02-03 03:04:33.176096 | orchestrator | ++ export EXTERNAL_API=false 2026-02-03 03:04:33.176101 | orchestrator | ++ EXTERNAL_API=false 2026-02-03 03:04:33.176106 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-03 03:04:33.176110 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-03 03:04:33.176136 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-03 03:04:33.176140 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-03 03:04:33.176145 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-03 03:04:33.176150 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-03 03:04:33.176155 | orchestrator | + echo 2026-02-03 03:04:33.176159 | orchestrator | + echo '# PULL IMAGES' 2026-02-03 03:04:33.176164 | orchestrator | + echo 2026-02-03 03:04:33.176175 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-03 03:04:33.238805 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-03 03:04:33.238944 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-02-03 03:04:35.140925 | orchestrator | 2026-02-03 03:04:35 | INFO  | Trying to run play pull-images in environment custom 2026-02-03 03:04:45.335775 | orchestrator | 2026-02-03 03:04:45 | INFO  | Task 350c1cdd-2751-4f8b-84af-8b779f6643c5 (pull-images) was prepared for execution. 2026-02-03 03:04:45.335944 | orchestrator | 2026-02-03 03:04:45 | INFO  | Task 350c1cdd-2751-4f8b-84af-8b779f6643c5 is running in background. No more output. Check ARA for logs. 
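The trace above gates `pull-images` on a manager-version check: `semver 9.5.0 7.0.0` prints `1` (left argument is the newer version) and `[[ 1 -ge 0 ]]` lets the apply proceed. The `semver` helper's implementation is not shown in the log; a minimal stand-in using `sort -V`, assuming the same `1 / 0 / -1` contract, could look like this:

```shell
# Hypothetical stand-in for the `semver` helper seen in the trace.
# Assumed contract: prints 1 if $1 > $2, 0 if equal, -1 if $1 < $2.
semver_cmp() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]; then
        echo 1   # $2 sorts first under version sort, so $1 is newer
    else
        echo -1
    fi
}

# Version gate as it appears in the pull-images trace (9.5.0 vs. 7.0.0).
if [ "$(semver_cmp 9.5.0 7.0.0)" -ge 0 ]; then
    echo "would run: osism apply --no-wait -r 2 -e custom pull-images"
fi
```

The same gate recurs later in the log (`semver 9.5.0 8.0.3` before `osism apply frr`), so one helper serves every version-conditional step.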
2026-02-03 03:04:45.710937 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh 2026-02-03 03:04:57.921153 | orchestrator | 2026-02-03 03:04:57 | INFO  | Task a11f48b7-ce66-4e7b-beec-f63182a6c0bd (cgit) was prepared for execution. 2026-02-03 03:04:57.921297 | orchestrator | 2026-02-03 03:04:57 | INFO  | Task a11f48b7-ce66-4e7b-beec-f63182a6c0bd is running in background. No more output. Check ARA for logs. 2026-02-03 03:05:10.688084 | orchestrator | 2026-02-03 03:05:10 | INFO  | Task bfcda60d-5224-40ed-8687-b934f5b312c8 (dotfiles) was prepared for execution. 2026-02-03 03:05:10.688196 | orchestrator | 2026-02-03 03:05:10 | INFO  | Task bfcda60d-5224-40ed-8687-b934f5b312c8 is running in background. No more output. Check ARA for logs. 2026-02-03 03:05:23.322353 | orchestrator | 2026-02-03 03:05:23 | INFO  | Task f9da617b-1a4c-49a3-984c-d1bf25e4091a (homer) was prepared for execution. 2026-02-03 03:05:23.322462 | orchestrator | 2026-02-03 03:05:23 | INFO  | Task f9da617b-1a4c-49a3-984c-d1bf25e4091a is running in background. No more output. Check ARA for logs. 2026-02-03 03:05:35.977728 | orchestrator | 2026-02-03 03:05:35 | INFO  | Task 9c8ea311-9bf6-43a0-850e-258475809fa6 (phpmyadmin) was prepared for execution. 2026-02-03 03:05:35.977822 | orchestrator | 2026-02-03 03:05:35 | INFO  | Task 9c8ea311-9bf6-43a0-850e-258475809fa6 is running in background. No more output. Check ARA for logs. 2026-02-03 03:05:48.482469 | orchestrator | 2026-02-03 03:05:48 | INFO  | Task 10b43872-f67f-47f1-8a1c-156c84d4c992 (sosreport) was prepared for execution. 2026-02-03 03:05:48.482565 | orchestrator | 2026-02-03 03:05:48 | INFO  | Task 10b43872-f67f-47f1-8a1c-156c84d4c992 is running in background. No more output. Check ARA for logs. 
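`001-helpers.sh` queues the helper services (cgit, dotfiles, homer, phpmyadmin, sosreport) one after another; each `osism apply` returns as soon as the task is handed off ("running in background. No more output."). A sketch of that loop, with a stub function standing in for the real `osism` CLI so it can run outside the testbed (the stub and its message are illustrative only):

```shell
# Stub standing in for the real `osism` CLI; the actual command hands the
# play to a background worker and returns immediately.
osism() {
    echo "INFO | Task ($2) was prepared for execution."
}

# Queue each helper play in sequence, as 001-helpers.sh appears to do.
for play in cgit dotfiles homer phpmyadmin sosreport; do
    osism apply "$play"
done
```

Because the tasks run asynchronously, their detailed output lands in ARA rather than in this console log.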
2026-02-03 03:05:48.810405 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh 2026-02-03 03:05:48.816854 | orchestrator | + set -e 2026-02-03 03:05:48.816963 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-03 03:05:48.816975 | orchestrator | ++ export INTERACTIVE=false 2026-02-03 03:05:48.816982 | orchestrator | ++ INTERACTIVE=false 2026-02-03 03:05:48.816990 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-03 03:05:48.816996 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-03 03:05:48.817002 | orchestrator | + source /opt/manager-vars.sh 2026-02-03 03:05:48.817007 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-03 03:05:48.817013 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-03 03:05:48.817018 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-03 03:05:48.817024 | orchestrator | ++ CEPH_VERSION=reef 2026-02-03 03:05:48.817034 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-03 03:05:48.817043 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-03 03:05:48.817052 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-03 03:05:48.817061 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-03 03:05:48.817070 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-03 03:05:48.817080 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-03 03:05:48.817089 | orchestrator | ++ export ARA=false 2026-02-03 03:05:48.817099 | orchestrator | ++ ARA=false 2026-02-03 03:05:48.817107 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-03 03:05:48.817145 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-03 03:05:48.817156 | orchestrator | ++ export TEMPEST=false 2026-02-03 03:05:48.817166 | orchestrator | ++ TEMPEST=false 2026-02-03 03:05:48.817176 | orchestrator | ++ export IS_ZUUL=true 2026-02-03 03:05:48.817184 | orchestrator | ++ IS_ZUUL=true 2026-02-03 03:05:48.817208 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-02-03 03:05:48.817223 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-02-03 03:05:48.817233 | orchestrator | ++ export EXTERNAL_API=false 2026-02-03 03:05:48.817243 | orchestrator | ++ EXTERNAL_API=false 2026-02-03 03:05:48.817253 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-03 03:05:48.817261 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-03 03:05:48.817270 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-03 03:05:48.817279 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-03 03:05:48.817289 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-03 03:05:48.817298 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-03 03:05:48.817318 | orchestrator | ++ semver 9.5.0 8.0.3 2026-02-03 03:05:48.870480 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-03 03:05:48.870574 | orchestrator | + osism apply frr 2026-02-03 03:06:01.131360 | orchestrator | 2026-02-03 03:06:01 | INFO  | Task b904005d-1317-4cf0-863c-30bec7de1ff2 (frr) was prepared for execution. 2026-02-03 03:06:01.131455 | orchestrator | 2026-02-03 03:06:01 | INFO  | It takes a moment until task b904005d-1317-4cf0-863c-30bec7de1ff2 (frr) has been started and output is visible here. 
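Every script in the trace begins by sourcing `/opt/configuration/scripts/include.sh` and `/opt/manager-vars.sh`, which is why the same `export` block (NUMBER_OF_NODES, CEPH_VERSION, MANAGER_VERSION, ...) repeats in the `set -x` output for each stage. The pattern is a plain env file of `export` lines; a minimal reproduction with values taken from the log, written to a temp file here rather than the real path:

```shell
# Recreate a small slice of /opt/manager-vars.sh in a temp file and source it,
# mirroring how each deploy/upgrade script picks up its configuration.
vars="$(mktemp)"
cat > "$vars" <<'EOF'
export MANAGER_VERSION=9.5.0
export OPENSTACK_VERSION=2024.2
export CEPH_VERSION=reef
export NUMBER_OF_NODES=6
EOF

. "$vars"
echo "manager=$MANAGER_VERSION openstack=$OPENSTACK_VERSION"
```

Sourcing (rather than executing) the file is what makes the variables visible to the calling script and, via `export`, to everything it launches.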
2026-02-03 03:06:34.357350 | orchestrator | 2026-02-03 03:06:34.357471 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-02-03 03:06:34.357493 | orchestrator | 2026-02-03 03:06:34.357508 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-02-03 03:06:34.357531 | orchestrator | Tuesday 03 February 2026 03:06:08 +0000 (0:00:00.350) 0:00:00.350 ****** 2026-02-03 03:06:34.357545 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-02-03 03:06:34.357560 | orchestrator | 2026-02-03 03:06:34.357573 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-02-03 03:06:34.357587 | orchestrator | Tuesday 03 February 2026 03:06:08 +0000 (0:00:00.252) 0:00:00.602 ****** 2026-02-03 03:06:34.357600 | orchestrator | changed: [testbed-manager] 2026-02-03 03:06:34.357614 | orchestrator | 2026-02-03 03:06:34.357628 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-02-03 03:06:34.357644 | orchestrator | Tuesday 03 February 2026 03:06:10 +0000 (0:00:01.893) 0:00:02.495 ****** 2026-02-03 03:06:34.357657 | orchestrator | changed: [testbed-manager] 2026-02-03 03:06:34.357669 | orchestrator | 2026-02-03 03:06:34.357683 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-02-03 03:06:34.357696 | orchestrator | Tuesday 03 February 2026 03:06:23 +0000 (0:00:12.612) 0:00:15.108 ****** 2026-02-03 03:06:34.357710 | orchestrator | ok: [testbed-manager] 2026-02-03 03:06:34.357722 | orchestrator | 2026-02-03 03:06:34.357734 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-02-03 03:06:34.357746 | orchestrator | Tuesday 03 February 2026 03:06:24 +0000 (0:00:01.205) 0:00:16.314 ****** 2026-02-03 
03:06:34.357758 | orchestrator | changed: [testbed-manager] 2026-02-03 03:06:34.357770 | orchestrator | 2026-02-03 03:06:34.357781 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-02-03 03:06:34.357794 | orchestrator | Tuesday 03 February 2026 03:06:25 +0000 (0:00:01.153) 0:00:17.467 ****** 2026-02-03 03:06:34.357807 | orchestrator | ok: [testbed-manager] 2026-02-03 03:06:34.357820 | orchestrator | 2026-02-03 03:06:34.357833 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-02-03 03:06:34.357848 | orchestrator | Tuesday 03 February 2026 03:06:26 +0000 (0:00:01.280) 0:00:18.747 ****** 2026-02-03 03:06:34.357860 | orchestrator | skipping: [testbed-manager] 2026-02-03 03:06:34.357872 | orchestrator | 2026-02-03 03:06:34.357885 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-02-03 03:06:34.357898 | orchestrator | Tuesday 03 February 2026 03:06:26 +0000 (0:00:00.141) 0:00:18.889 ****** 2026-02-03 03:06:34.357941 | orchestrator | skipping: [testbed-manager] 2026-02-03 03:06:34.357957 | orchestrator | 2026-02-03 03:06:34.358006 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-02-03 03:06:34.358100 | orchestrator | Tuesday 03 February 2026 03:06:27 +0000 (0:00:00.151) 0:00:19.041 ****** 2026-02-03 03:06:34.358116 | orchestrator | changed: [testbed-manager] 2026-02-03 03:06:34.358128 | orchestrator | 2026-02-03 03:06:34.358141 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-02-03 03:06:34.358154 | orchestrator | Tuesday 03 February 2026 03:06:28 +0000 (0:00:01.091) 0:00:20.133 ****** 2026-02-03 03:06:34.358167 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-02-03 03:06:34.358180 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-02-03 03:06:34.358196 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-02-03 03:06:34.358209 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-02-03 03:06:34.358222 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-02-03 03:06:34.358235 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-02-03 03:06:34.358248 | orchestrator | 2026-02-03 03:06:34.358261 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-02-03 03:06:34.358274 | orchestrator | Tuesday 03 February 2026 03:06:30 +0000 (0:00:02.489) 0:00:22.622 ****** 2026-02-03 03:06:34.358287 | orchestrator | ok: [testbed-manager] 2026-02-03 03:06:34.358300 | orchestrator | 2026-02-03 03:06:34.358313 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-02-03 03:06:34.358326 | orchestrator | Tuesday 03 February 2026 03:06:32 +0000 (0:00:01.773) 0:00:24.396 ****** 2026-02-03 03:06:34.358338 | orchestrator | changed: [testbed-manager] 2026-02-03 03:06:34.358351 | orchestrator | 2026-02-03 03:06:34.358363 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 03:06:34.358376 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 03:06:34.358390 | orchestrator | 2026-02-03 03:06:34.358403 | orchestrator | 2026-02-03 03:06:34.358449 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 03:06:34.358465 | orchestrator | Tuesday 03 February 2026 03:06:33 +0000 (0:00:01.494) 0:00:25.891 ****** 2026-02-03 03:06:34.358479 | 
orchestrator | =============================================================================== 2026-02-03 03:06:34.358492 | orchestrator | osism.services.frr : Install frr package ------------------------------- 12.61s 2026-02-03 03:06:34.358519 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.49s 2026-02-03 03:06:34.358532 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.89s 2026-02-03 03:06:34.358545 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.77s 2026-02-03 03:06:34.358557 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.49s 2026-02-03 03:06:34.358611 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.28s 2026-02-03 03:06:34.358626 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.21s 2026-02-03 03:06:34.358640 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.15s 2026-02-03 03:06:34.358653 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.09s 2026-02-03 03:06:34.358666 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.25s 2026-02-03 03:06:34.358680 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.15s 2026-02-03 03:06:34.358692 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.14s 2026-02-03 03:06:34.702179 | orchestrator | + osism apply kubernetes 2026-02-03 03:06:36.917848 | orchestrator | 2026-02-03 03:06:36 | INFO  | Task 48643614-b9c0-4c82-a888-ea1108262446 (kubernetes) was prepared for execution. 
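The `Set sysctl parameters` task in the frr play above applies six kernel settings needed for routing on the manager (IPv4 forwarding on, ICMP redirects off, multipath hashing, linkdown route handling, loose rp_filter). Outside of Ansible, the equivalent would be a drop-in under `/etc/sysctl.d/`; the sketch below only renders the file content, and the target path in the comment is illustrative:

```shell
# Render a sysctl drop-in matching the items shown in the frr task output.
render_frr_sysctl() {
    cat <<'EOF'
net.ipv4.ip_forward = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.fib_multipath_hash_policy = 1
net.ipv4.conf.default.ignore_routes_with_linkdown = 1
net.ipv4.conf.all.rp_filter = 2
EOF
}

# Would then be installed with something like:
#   render_frr_sysctl | sudo tee /etc/sysctl.d/90-frr.conf && sudo sysctl --system
render_frr_sysctl
```

The k3s_prereq tasks that follow enable IPv4/IPv6 forwarding on the nodes the same way, just through the role's own sysctl handling.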
2026-02-03 03:06:36.918093 | orchestrator | 2026-02-03 03:06:36 | INFO  | It takes a moment until task 48643614-b9c0-4c82-a888-ea1108262446 (kubernetes) has been started and output is visible here. 2026-02-03 03:07:04.590503 | orchestrator | 2026-02-03 03:07:04.590588 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-02-03 03:07:04.590599 | orchestrator | 2026-02-03 03:07:04.590605 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-02-03 03:07:04.590612 | orchestrator | Tuesday 03 February 2026 03:06:42 +0000 (0:00:00.438) 0:00:00.438 ****** 2026-02-03 03:07:04.590618 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:07:04.590625 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:07:04.590631 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:07:04.590637 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:07:04.590643 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:07:04.590649 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:07:04.590654 | orchestrator | 2026-02-03 03:07:04.590660 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-02-03 03:07:04.590665 | orchestrator | Tuesday 03 February 2026 03:06:43 +0000 (0:00:01.011) 0:00:01.450 ****** 2026-02-03 03:07:04.590671 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:07:04.590677 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:07:04.590682 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:07:04.590688 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:07:04.590693 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:07:04.590699 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:07:04.590704 | orchestrator | 2026-02-03 03:07:04.590709 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-02-03 03:07:04.590717 | orchestrator | Tuesday 03 February 2026 
03:06:44 +0000 (0:00:00.769) 0:00:02.220 ****** 2026-02-03 03:07:04.590723 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:07:04.590728 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:07:04.590733 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:07:04.590739 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:07:04.590744 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:07:04.590750 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:07:04.590755 | orchestrator | 2026-02-03 03:07:04.590760 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-02-03 03:07:04.590766 | orchestrator | Tuesday 03 February 2026 03:06:44 +0000 (0:00:00.811) 0:00:03.031 ****** 2026-02-03 03:07:04.590772 | orchestrator | changed: [testbed-node-3] 2026-02-03 03:07:04.590777 | orchestrator | changed: [testbed-node-4] 2026-02-03 03:07:04.590782 | orchestrator | changed: [testbed-node-5] 2026-02-03 03:07:04.590791 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:07:04.590796 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:07:04.590802 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:07:04.590807 | orchestrator | 2026-02-03 03:07:04.590813 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-02-03 03:07:04.590818 | orchestrator | Tuesday 03 February 2026 03:06:47 +0000 (0:00:02.741) 0:00:05.773 ****** 2026-02-03 03:07:04.590824 | orchestrator | changed: [testbed-node-3] 2026-02-03 03:07:04.590829 | orchestrator | changed: [testbed-node-4] 2026-02-03 03:07:04.590835 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:07:04.590840 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:07:04.590858 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:07:04.590864 | orchestrator | changed: [testbed-node-5] 2026-02-03 03:07:04.590870 | orchestrator | 2026-02-03 03:07:04.590883 | orchestrator | TASK [k3s_prereq : Enable 
IPv6 router advertisements] ************************** 2026-02-03 03:07:04.590890 | orchestrator | Tuesday 03 February 2026 03:06:49 +0000 (0:00:01.931) 0:00:07.705 ****** 2026-02-03 03:07:04.590899 | orchestrator | changed: [testbed-node-3] 2026-02-03 03:07:04.590934 | orchestrator | changed: [testbed-node-5] 2026-02-03 03:07:04.590945 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:07:04.590953 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:07:04.590963 | orchestrator | changed: [testbed-node-4] 2026-02-03 03:07:04.590971 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:07:04.590980 | orchestrator | 2026-02-03 03:07:04.590995 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-02-03 03:07:04.591021 | orchestrator | Tuesday 03 February 2026 03:06:51 +0000 (0:00:02.132) 0:00:09.837 ****** 2026-02-03 03:07:04.591030 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:07:04.591039 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:07:04.591048 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:07:04.591057 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:07:04.591067 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:07:04.591076 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:07:04.591085 | orchestrator | 2026-02-03 03:07:04.591093 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-02-03 03:07:04.591099 | orchestrator | Tuesday 03 February 2026 03:06:52 +0000 (0:00:00.553) 0:00:10.390 ****** 2026-02-03 03:07:04.591104 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:07:04.591110 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:07:04.591115 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:07:04.591121 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:07:04.591126 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:07:04.591131 | orchestrator | skipping: 
[testbed-node-2] 2026-02-03 03:07:04.591137 | orchestrator | 2026-02-03 03:07:04.591142 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-02-03 03:07:04.591148 | orchestrator | Tuesday 03 February 2026 03:06:52 +0000 (0:00:00.743) 0:00:11.134 ****** 2026-02-03 03:07:04.591153 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-03 03:07:04.591159 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-03 03:07:04.591164 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:07:04.591170 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-03 03:07:04.591175 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-03 03:07:04.591181 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:07:04.591186 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-03 03:07:04.591192 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-03 03:07:04.591197 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:07:04.591203 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-03 03:07:04.591223 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-03 03:07:04.591229 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:07:04.591234 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-03 03:07:04.591240 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-03 03:07:04.591245 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:07:04.591251 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-03 03:07:04.591256 | orchestrator | 
skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-03 03:07:04.591262 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:07:04.591267 | orchestrator | 2026-02-03 03:07:04.591273 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-02-03 03:07:04.591278 | orchestrator | Tuesday 03 February 2026 03:06:53 +0000 (0:00:00.636) 0:00:11.770 ****** 2026-02-03 03:07:04.591284 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:07:04.591289 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:07:04.591295 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:07:04.591307 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:07:04.591313 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:07:04.591318 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:07:04.591324 | orchestrator | 2026-02-03 03:07:04.591329 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-02-03 03:07:04.591336 | orchestrator | Tuesday 03 February 2026 03:06:54 +0000 (0:00:01.175) 0:00:12.946 ****** 2026-02-03 03:07:04.591341 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:07:04.591347 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:07:04.591352 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:07:04.591358 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:07:04.591363 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:07:04.591369 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:07:04.591374 | orchestrator | 2026-02-03 03:07:04.591380 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-02-03 03:07:04.591385 | orchestrator | Tuesday 03 February 2026 03:06:55 +0000 (0:00:00.812) 0:00:13.758 ****** 2026-02-03 03:07:04.591391 | orchestrator | changed: [testbed-node-3] 2026-02-03 03:07:04.591396 | orchestrator | changed: [testbed-node-0] 
2026-02-03 03:07:04.591402 | orchestrator | changed: [testbed-node-5] 2026-02-03 03:07:04.591407 | orchestrator | changed: [testbed-node-4] 2026-02-03 03:07:04.591413 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:07:04.591418 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:07:04.591423 | orchestrator | 2026-02-03 03:07:04.591429 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-02-03 03:07:04.591435 | orchestrator | Tuesday 03 February 2026 03:07:00 +0000 (0:00:05.114) 0:00:18.873 ****** 2026-02-03 03:07:04.591440 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:07:04.591449 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:07:04.591455 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:07:04.591460 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:07:04.591466 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:07:04.591471 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:07:04.591477 | orchestrator | 2026-02-03 03:07:04.591482 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-02-03 03:07:04.591488 | orchestrator | Tuesday 03 February 2026 03:07:01 +0000 (0:00:01.052) 0:00:19.925 ****** 2026-02-03 03:07:04.591493 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:07:04.591499 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:07:04.591504 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:07:04.591510 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:07:04.591515 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:07:04.591521 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:07:04.591526 | orchestrator | 2026-02-03 03:07:04.591531 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-02-03 03:07:04.591538 | orchestrator | Tuesday 03 February 2026 
03:07:03 +0000 (0:00:01.318) 0:00:21.243 ******
2026-02-03 03:07:04.591544 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:07:04.591549 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:07:04.591555 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:07:04.591560 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:07:04.591565 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:07:04.591571 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:07:04.591576 | orchestrator |
2026-02-03 03:07:04.591582 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-02-03 03:07:04.591587 | orchestrator | Tuesday 03 February 2026 03:07:03 +0000 (0:00:00.600) 0:00:21.844 ******
2026-02-03 03:07:04.591593 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-02-03 03:07:04.591603 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-02-03 03:07:04.591608 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:07:04.591614 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-02-03 03:07:04.591623 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-02-03 03:07:04.591629 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:07:04.591634 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-02-03 03:07:04.591639 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-02-03 03:07:04.591645 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:07:04.591651 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-02-03 03:07:04.591656 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-02-03 03:07:04.591662 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:07:04.591667 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-02-03 03:07:04.591672 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-02-03 03:07:04.591678 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:07:04.591683 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-02-03 03:07:04.591689 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-02-03 03:07:04.591694 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:07:04.591700 | orchestrator |
2026-02-03 03:07:04.591705 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-02-03 03:07:04.591714 | orchestrator | Tuesday 03 February 2026 03:07:04 +0000 (0:00:00.922) 0:00:22.767 ******
2026-02-03 03:08:20.959521 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:08:20.959626 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:08:20.959639 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:08:20.959649 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:08:20.959658 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:08:20.959668 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:08:20.959677 | orchestrator |
2026-02-03 03:08:20.959688 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-02-03 03:08:20.959699 | orchestrator | Tuesday 03 February 2026 03:07:05 +0000 (0:00:00.595) 0:00:23.362 ******
2026-02-03 03:08:20.959709 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:08:20.959718 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:08:20.959727 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:08:20.959736 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:08:20.959744 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:08:20.959753 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:08:20.959762 | orchestrator |
2026-02-03 03:08:20.959771 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-02-03 03:08:20.959780 | orchestrator |
2026-02-03 03:08:20.959789 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-02-03 03:08:20.959800 | orchestrator | Tuesday 03 February 2026 03:07:06 +0000 (0:00:01.302) 0:00:24.665 ******
2026-02-03 03:08:20.959809 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:08:20.959818 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:08:20.959827 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:08:20.959835 | orchestrator |
2026-02-03 03:08:20.959844 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-02-03 03:08:20.959853 | orchestrator | Tuesday 03 February 2026 03:07:07 +0000 (0:00:01.387) 0:00:26.052 ******
2026-02-03 03:08:20.959862 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:08:20.959871 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:08:20.959880 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:08:20.959888 | orchestrator |
2026-02-03 03:08:20.959897 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-02-03 03:08:20.959906 | orchestrator | Tuesday 03 February 2026 03:07:09 +0000 (0:00:01.529) 0:00:27.582 ******
2026-02-03 03:08:20.959915 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:08:20.959924 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:08:20.959932 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:08:20.959942 | orchestrator |
2026-02-03 03:08:20.959951 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-02-03 03:08:20.959981 | orchestrator | Tuesday 03 February 2026 03:07:10 +0000 (0:00:01.496) 0:00:29.079 ******
2026-02-03 03:08:20.959991 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:08:20.960000 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:08:20.960008 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:08:20.960017 | orchestrator |
2026-02-03 03:08:20.960026 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-02-03 03:08:20.960035 | orchestrator | Tuesday 03 February 2026 03:07:12 +0000 (0:00:01.767) 0:00:30.846 ******
2026-02-03 03:08:20.960044 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:08:20.960053 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:08:20.960062 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:08:20.960071 | orchestrator |
2026-02-03 03:08:20.960100 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-02-03 03:08:20.960134 | orchestrator | Tuesday 03 February 2026 03:07:12 +0000 (0:00:00.351) 0:00:31.198 ******
2026-02-03 03:08:20.960144 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:08:20.960152 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:08:20.960161 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:08:20.960170 | orchestrator |
2026-02-03 03:08:20.960179 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-02-03 03:08:20.960188 | orchestrator | Tuesday 03 February 2026 03:07:13 +0000 (0:00:00.921) 0:00:32.120 ******
2026-02-03 03:08:20.960197 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:08:20.960205 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:08:20.960214 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:08:20.960223 | orchestrator |
2026-02-03 03:08:20.960232 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-02-03 03:08:20.960241 | orchestrator | Tuesday 03 February 2026 03:07:15 +0000 (0:00:01.405) 0:00:33.526 ******
2026-02-03 03:08:20.960250 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 03:08:20.960258 | orchestrator |
2026-02-03 03:08:20.960267 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-02-03 03:08:20.960276 | orchestrator | Tuesday 03 February 2026 03:07:15 +0000 (0:00:00.511) 0:00:34.037 ******
2026-02-03 03:08:20.960285 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:08:20.960294 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:08:20.960303 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:08:20.960312 | orchestrator |
2026-02-03 03:08:20.960321 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-02-03 03:08:20.960330 | orchestrator | Tuesday 03 February 2026 03:07:17 +0000 (0:00:01.960) 0:00:35.998 ******
2026-02-03 03:08:20.960346 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:08:20.960361 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:08:20.960376 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:08:20.960397 | orchestrator |
2026-02-03 03:08:20.960413 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-02-03 03:08:20.960427 | orchestrator | Tuesday 03 February 2026 03:07:18 +0000 (0:00:00.564) 0:00:36.562 ******
2026-02-03 03:08:20.960442 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:08:20.960456 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:08:20.960470 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:08:20.960485 | orchestrator |
2026-02-03 03:08:20.960498 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-02-03 03:08:20.960512 | orchestrator | Tuesday 03 February 2026 03:07:19 +0000 (0:00:00.758) 0:00:37.320 ******
2026-02-03 03:08:20.960526 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:08:20.960539 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:08:20.960553 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:08:20.960567 | orchestrator |
2026-02-03 03:08:20.960581 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-02-03 03:08:20.960617 | orchestrator | Tuesday 03 February 2026 03:07:20 +0000 (0:00:01.238) 0:00:38.558 ******
2026-02-03 03:08:20.960636 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:08:20.960666 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:08:20.960680 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:08:20.960696 | orchestrator |
2026-02-03 03:08:20.960711 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-02-03 03:08:20.960725 | orchestrator | Tuesday 03 February 2026 03:07:20 +0000 (0:00:00.372) 0:00:38.931 ******
2026-02-03 03:08:20.960740 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:08:20.960755 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:08:20.960768 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:08:20.960777 | orchestrator |
2026-02-03 03:08:20.960786 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-02-03 03:08:20.960795 | orchestrator | Tuesday 03 February 2026 03:07:21 +0000 (0:00:00.634) 0:00:39.565 ******
2026-02-03 03:08:20.960804 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:08:20.960813 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:08:20.960827 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:08:20.960848 | orchestrator |
2026-02-03 03:08:20.960875 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-02-03 03:08:20.960889 | orchestrator | Tuesday 03 February 2026 03:07:22 +0000 (0:00:01.191) 0:00:40.757 ******
2026-02-03 03:08:20.960903 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:08:20.960916 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:08:20.960930 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:08:20.960943 | orchestrator |
2026-02-03 03:08:20.960957 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-02-03 03:08:20.960972 | orchestrator | Tuesday 03 February 2026 03:07:25 +0000 (0:00:02.716) 0:00:43.474 ******
2026-02-03 03:08:20.960985 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:08:20.961001 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:08:20.961016 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:08:20.961035 | orchestrator |
2026-02-03 03:08:20.961047 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-02-03 03:08:20.961057 | orchestrator | Tuesday 03 February 2026 03:07:25 +0000 (0:00:00.343) 0:00:43.817 ******
2026-02-03 03:08:20.961067 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-03 03:08:20.961105 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-03 03:08:20.961116 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-03 03:08:20.961125 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-03 03:08:20.961133 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-03 03:08:20.961142 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-03 03:08:20.961151 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-03 03:08:20.961159 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-03 03:08:20.961168 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-03 03:08:20.961177 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-03 03:08:20.961185 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-03 03:08:20.961205 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-03 03:08:20.961213 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-02-03 03:08:20.961222 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-02-03 03:08:20.961233 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-02-03 03:08:20.961253 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:08:20.961272 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:08:20.961286 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:08:20.961299 | orchestrator |
2026-02-03 03:08:20.961319 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-02-03 03:08:20.961333 | orchestrator | Tuesday 03 February 2026 03:08:19 +0000 (0:00:54.040) 0:01:37.857 ******
2026-02-03 03:08:20.961347 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:08:20.961360 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:08:20.961374 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:08:20.961389 | orchestrator |
2026-02-03 03:08:20.961403 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-02-03 03:08:20.961417 | orchestrator | Tuesday 03 February 2026 03:08:19 +0000 (0:00:00.304) 0:01:38.162 ******
2026-02-03 03:08:20.961443 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:09:04.026472 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:09:04.026582 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:09:04.026598 | orchestrator |
2026-02-03 03:09:04.026606 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-02-03 03:09:04.026614 | orchestrator | Tuesday 03 February 2026 03:08:20 +0000 (0:00:00.999) 0:01:39.162 ******
2026-02-03 03:09:04.026620 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:09:04.026626 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:09:04.026632 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:09:04.026638 | orchestrator |
2026-02-03 03:09:04.026643 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-02-03 03:09:04.026650 | orchestrator | Tuesday 03 February 2026 03:08:22 +0000 (0:00:01.236) 0:01:40.398 ******
2026-02-03 03:09:04.026655 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:09:04.026661 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:09:04.026666 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:09:04.026672 | orchestrator |
2026-02-03 03:09:04.026678 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-02-03 03:09:04.026683 | orchestrator | Tuesday 03 February 2026 03:08:49 +0000 (0:00:27.074) 0:02:07.473 ******
2026-02-03 03:09:04.026689 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:09:04.026695 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:09:04.026701 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:09:04.026706 | orchestrator |
2026-02-03 03:09:04.026712 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-02-03 03:09:04.026718 | orchestrator | Tuesday 03 February 2026 03:08:49 +0000 (0:00:00.638) 0:02:08.111 ******
2026-02-03 03:09:04.026723 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:09:04.026729 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:09:04.026734 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:09:04.026740 | orchestrator |
2026-02-03 03:09:04.026745 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-02-03 03:09:04.026751 | orchestrator | Tuesday 03 February 2026 03:08:50 +0000 (0:00:00.688) 0:02:08.799 ******
2026-02-03 03:09:04.026757 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:09:04.026762 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:09:04.026768 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:09:04.026773 | orchestrator |
2026-02-03 03:09:04.026779 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-02-03 03:09:04.026803 | orchestrator | Tuesday 03 February 2026 03:08:51 +0000 (0:00:00.646) 0:02:09.446 ******
2026-02-03 03:09:04.026809 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:09:04.026815 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:09:04.026820 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:09:04.026826 | orchestrator |
2026-02-03 03:09:04.026831 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-02-03 03:09:04.026837 | orchestrator | Tuesday 03 February 2026 03:08:52 +0000 (0:00:00.799) 0:02:10.246 ******
2026-02-03 03:09:04.026842 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:09:04.026847 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:09:04.026853 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:09:04.026858 | orchestrator |
2026-02-03 03:09:04.026864 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-02-03 03:09:04.026870 | orchestrator | Tuesday 03 February 2026 03:08:52 +0000 (0:00:00.321) 0:02:10.568 ******
2026-02-03 03:09:04.026875 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:09:04.026881 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:09:04.026886 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:09:04.026892 | orchestrator |
2026-02-03 03:09:04.026897 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-02-03 03:09:04.026903 | orchestrator | Tuesday 03 February 2026 03:08:53 +0000 (0:00:00.655) 0:02:11.223 ******
2026-02-03 03:09:04.026908 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:09:04.026913 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:09:04.026919 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:09:04.026925 | orchestrator |
2026-02-03 03:09:04.026930 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-02-03 03:09:04.026936 | orchestrator | Tuesday 03 February 2026 03:08:53 +0000 (0:00:00.648) 0:02:11.872 ******
2026-02-03 03:09:04.026941 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:09:04.026947 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:09:04.026952 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:09:04.026958 | orchestrator |
2026-02-03 03:09:04.026964 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-02-03 03:09:04.026969 | orchestrator | Tuesday 03 February 2026 03:08:54 +0000 (0:00:00.892) 0:02:12.765 ******
2026-02-03 03:09:04.026977 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:09:04.026982 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:09:04.026988 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:09:04.026993 | orchestrator |
2026-02-03 03:09:04.026999 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-02-03 03:09:04.027006 | orchestrator | Tuesday 03 February 2026 03:08:55 +0000 (0:00:01.074) 0:02:13.839 ******
2026-02-03 03:09:04.027012 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:09:04.027019 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:09:04.027025 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:09:04.027032 | orchestrator |
2026-02-03 03:09:04.027040 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-02-03 03:09:04.027049 | orchestrator | Tuesday 03 February 2026 03:08:55 +0000 (0:00:00.305) 0:02:14.145 ******
2026-02-03 03:09:04.027060 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:09:04.027072 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:09:04.027081 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:09:04.027090 | orchestrator |
2026-02-03 03:09:04.027099 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-02-03 03:09:04.027108 | orchestrator | Tuesday 03 February 2026 03:08:56 +0000 (0:00:00.295) 0:02:14.440 ******
2026-02-03 03:09:04.027160 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:09:04.027171 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:09:04.027180 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:09:04.027190 | orchestrator |
2026-02-03 03:09:04.027199 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-02-03 03:09:04.027209 | orchestrator | Tuesday 03 February 2026 03:08:56 +0000 (0:00:00.640) 0:02:15.080 ******
2026-02-03 03:09:04.027227 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:09:04.027236 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:09:04.027261 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:09:04.027269 | orchestrator |
2026-02-03 03:09:04.027276 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-02-03 03:09:04.027285 | orchestrator | Tuesday 03 February 2026 03:08:57 +0000 (0:00:00.920) 0:02:16.001 ******
2026-02-03 03:09:04.027292 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-03 03:09:04.027299 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-03 03:09:04.027305 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-03 03:09:04.027312 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-03 03:09:04.027318 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-03 03:09:04.027325 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-03 03:09:04.027331 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-03 03:09:04.027339 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-03 03:09:04.027346 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-03 03:09:04.027353 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-02-03 03:09:04.027360 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-03 03:09:04.027367 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-03 03:09:04.027374 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-02-03 03:09:04.027379 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-03 03:09:04.027385 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-03 03:09:04.027390 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-03 03:09:04.027396 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-03 03:09:04.027401 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-03 03:09:04.027407 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-03 03:09:04.027412 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-03 03:09:04.027418 | orchestrator |
2026-02-03 03:09:04.027424 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-02-03 03:09:04.027429 | orchestrator |
2026-02-03 03:09:04.027435 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-02-03 03:09:04.027440 | orchestrator | Tuesday 03 February 2026 03:09:00 +0000 (0:00:03.047) 0:02:19.049 ******
2026-02-03 03:09:04.027446 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:09:04.027451 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:09:04.027457 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:09:04.027462 | orchestrator |
2026-02-03 03:09:04.027480 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-02-03 03:09:04.027486 | orchestrator | Tuesday 03 February 2026 03:09:01 +0000 (0:00:00.330) 0:02:19.380 ******
2026-02-03 03:09:04.027492 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:09:04.027497 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:09:04.027502 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:09:04.027512 | orchestrator |
2026-02-03 03:09:04.027517 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-02-03 03:09:04.027523 | orchestrator | Tuesday 03 February 2026 03:09:02 +0000 (0:00:00.902) 0:02:20.282 ******
2026-02-03 03:09:04.027528 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:09:04.027534 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:09:04.027539 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:09:04.027545 | orchestrator |
2026-02-03 03:09:04.027550 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-02-03 03:09:04.027556 | orchestrator | Tuesday 03 February 2026 03:09:02 +0000 (0:00:00.375) 0:02:20.658 ******
2026-02-03 03:09:04.027562 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 03:09:04.027572 | orchestrator |
2026-02-03 03:09:04.027586 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-02-03 03:09:04.027595 | orchestrator | Tuesday 03 February 2026 03:09:02 +0000 (0:00:00.498) 0:02:21.157 ******
2026-02-03 03:09:04.027604 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:09:04.027613 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:09:04.027621 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:09:04.027630 | orchestrator |
2026-02-03 03:09:04.027638 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-02-03 03:09:04.027646 | orchestrator | Tuesday 03 February 2026 03:09:03 +0000 (0:00:00.529) 0:02:21.686 ******
2026-02-03 03:09:04.027655 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:09:04.027663 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:09:04.027672 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:09:04.027681 | orchestrator |
2026-02-03 03:09:04.027690 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-02-03 03:09:04.027698 | orchestrator | Tuesday 03 February 2026 03:09:03 +0000 (0:00:00.323) 0:02:22.009 ******
2026-02-03 03:09:04.027714 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:10:44.956858 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:10:44.956969 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:10:44.956984 | orchestrator |
2026-02-03 03:10:44.956997 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-02-03 03:10:44.957010 | orchestrator | Tuesday 03 February 2026 03:09:04 +0000 (0:00:00.357) 0:02:22.367 ******
2026-02-03 03:10:44.957021 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:10:44.957032 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:10:44.957043 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:10:44.957055 | orchestrator |
2026-02-03 03:10:44.957066 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-02-03 03:10:44.957077 | orchestrator | Tuesday 03 February 2026 03:09:04 +0000 (0:00:00.624) 0:02:22.991 ******
2026-02-03 03:10:44.957088 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:10:44.957099 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:10:44.957110 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:10:44.957121 | orchestrator |
2026-02-03 03:10:44.957132 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-02-03 03:10:44.957143 | orchestrator | Tuesday 03 February 2026 03:09:06 +0000 (0:00:01.389) 0:02:24.381 ******
2026-02-03 03:10:44.957154 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:10:44.957165 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:10:44.957176 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:10:44.957187 | orchestrator |
2026-02-03 03:10:44.957198 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-02-03 03:10:44.957209 | orchestrator | Tuesday 03 February 2026 03:09:07 +0000 (0:00:01.373) 0:02:25.755 ******
2026-02-03 03:10:44.957266 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:10:44.957278 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:10:44.957289 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:10:44.957300 | orchestrator |
2026-02-03 03:10:44.957311 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-02-03 03:10:44.957383 | orchestrator |
2026-02-03 03:10:44.957396 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-02-03 03:10:44.957407 | orchestrator | Tuesday 03 February 2026 03:09:17 +0000 (0:00:10.025) 0:02:35.780 ******
2026-02-03 03:10:44.957417 | orchestrator | ok: [testbed-manager]
2026-02-03 03:10:44.957429 | orchestrator |
2026-02-03 03:10:44.957441 | orchestrator | TASK [Create .kube directory] **************************************************
2026-02-03 03:10:44.957451 | orchestrator | Tuesday 03 February 2026 03:09:18 +0000 (0:00:00.859) 0:02:36.640 ******
2026-02-03 03:10:44.957462 | orchestrator | changed: [testbed-manager]
2026-02-03 03:10:44.957474 | orchestrator |
2026-02-03 03:10:44.957485 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-03 03:10:44.957496 | orchestrator | Tuesday 03 February 2026 03:09:19 +0000 (0:00:00.671) 0:02:37.312 ******
2026-02-03 03:10:44.957507 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-03 03:10:44.957518 | orchestrator |
2026-02-03 03:10:44.957529 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-03 03:10:44.957540 | orchestrator | Tuesday 03 February 2026 03:09:19 +0000 (0:00:00.573) 0:02:37.885 ******
2026-02-03 03:10:44.957559 | orchestrator | changed: [testbed-manager]
2026-02-03 03:10:44.957577 | orchestrator |
2026-02-03 03:10:44.957596 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-02-03 03:10:44.957614 | orchestrator | Tuesday 03 February 2026 03:09:20 +0000 (0:00:00.950) 0:02:38.836 ******
2026-02-03 03:10:44.957632 | orchestrator | changed: [testbed-manager]
2026-02-03 03:10:44.957650 | orchestrator |
2026-02-03 03:10:44.957667 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-02-03 03:10:44.957683 | orchestrator | Tuesday 03 February 2026 03:09:21 +0000 (0:00:00.581) 0:02:39.418 ******
2026-02-03 03:10:44.957699 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-03 03:10:44.957716 | orchestrator |
2026-02-03 03:10:44.957734 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-02-03 03:10:44.957750 | orchestrator | Tuesday 03 February 2026 03:09:22 +0000 (0:00:01.611) 0:02:41.029 ******
2026-02-03 03:10:44.957802 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-03 03:10:44.957865 | orchestrator |
2026-02-03 03:10:44.957911 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-02-03 03:10:44.957930 | orchestrator | Tuesday 03 February 2026 03:09:23 +0000 (0:00:00.860) 0:02:41.890 ******
2026-02-03 03:10:44.957948 | orchestrator | changed: [testbed-manager]
2026-02-03 03:10:44.957964 | orchestrator |
2026-02-03 03:10:44.957981 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-02-03 03:10:44.957997 | orchestrator | Tuesday 03 February 2026 03:09:24 +0000 (0:00:00.486) 0:02:42.376 ******
2026-02-03 03:10:44.958014 | orchestrator | changed: [testbed-manager]
2026-02-03 03:10:44.958106 | orchestrator |
2026-02-03 03:10:44.958124 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-02-03 03:10:44.958141 | orchestrator |
2026-02-03 03:10:44.958160 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-02-03 03:10:44.958178 | orchestrator | Tuesday 03 February 2026 03:09:24 +0000 (0:00:00.480) 0:02:42.857 ******
2026-02-03 03:10:44.958197 | orchestrator | ok: [testbed-manager]
2026-02-03 03:10:44.958254 | orchestrator |
2026-02-03 03:10:44.958274 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-02-03 03:10:44.958291 | orchestrator | Tuesday 03 February 2026 03:09:24 +0000 (0:00:00.172) 0:02:43.029 ******
2026-02-03 03:10:44.958307 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-02-03 03:10:44.958324 | orchestrator |
2026-02-03 03:10:44.958340 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-02-03 03:10:44.958357 | orchestrator | Tuesday 03 February 2026 03:09:25 +0000 (0:00:00.489) 0:02:43.519 ******
2026-02-03 03:10:44.958374 | orchestrator | ok: [testbed-manager]
2026-02-03 03:10:44.958391 | orchestrator |
2026-02-03 03:10:44.958430 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-02-03 03:10:44.958449 | orchestrator | Tuesday 03 February 2026 03:09:26 +0000 (0:00:00.872) 0:02:44.391 ******
2026-02-03 03:10:44.958467 | orchestrator | ok: [testbed-manager]
2026-02-03 03:10:44.958485 | orchestrator |
2026-02-03 03:10:44.958532 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-02-03 03:10:44.958546 | orchestrator | Tuesday 03 February 2026 03:09:27 +0000 (0:00:01.786) 0:02:46.178 ******
2026-02-03 03:10:44.958557 | orchestrator | changed: [testbed-manager]
2026-02-03 03:10:44.958568 | orchestrator |
2026-02-03 03:10:44.958579 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-02-03 03:10:44.958589 | orchestrator | Tuesday 03 February 2026 03:09:28 +0000 (0:00:00.894) 0:02:47.072 ******
2026-02-03 03:10:44.958600 | orchestrator | ok: [testbed-manager]
2026-02-03 03:10:44.958611 | orchestrator |
2026-02-03 03:10:44.958621 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-02-03 03:10:44.958632 | orchestrator | Tuesday 03 February 2026 03:09:29 +0000 (0:00:00.492) 0:02:47.565 ******
2026-02-03 03:10:44.958643 | orchestrator | changed: [testbed-manager]
2026-02-03 03:10:44.958654 | orchestrator |
2026-02-03 03:10:44.958664 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-02-03 03:10:44.958675 | orchestrator | Tuesday 03 February 2026 03:09:38 +0000 (0:00:08.806) 0:02:56.371 ******
2026-02-03 03:10:44.958691 | orchestrator | changed: [testbed-manager]
2026-02-03 03:10:44.958709 | orchestrator |
2026-02-03 03:10:44.958726 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-02-03 03:10:44.958743 | orchestrator | Tuesday 03 February 2026 03:09:51 +0000 (0:00:13.268) 0:03:09.639 ******
2026-02-03 03:10:44.958760 | orchestrator | ok: [testbed-manager]
2026-02-03 03:10:44.958777 | orchestrator |
2026-02-03 03:10:44.958793 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-02-03 03:10:44.958812 | orchestrator |
2026-02-03 03:10:44.958828 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-02-03 03:10:44.958847 | orchestrator | Tuesday 03 February 2026 03:09:52 +0000 (0:00:00.773) 0:03:10.412 ******
2026-02-03 03:10:44.958865 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:10:44.958884 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:10:44.958901 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:10:44.958918 | orchestrator |
2026-02-03 03:10:44.958938 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-02-03 03:10:44.958955 | orchestrator | Tuesday 03 February 2026 03:09:52 +0000 (0:00:00.322) 0:03:10.735 ******
2026-02-03 03:10:44.958975 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:10:44.958993 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:10:44.959010 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:10:44.959029 | orchestrator |
2026-02-03 03:10:44.959047 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-02-03 03:10:44.959065 | orchestrator | Tuesday 03 February 2026 03:09:52 +0000 (0:00:00.355) 0:03:11.090 ******
2026-02-03 03:10:44.959084 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 03:10:44.959102 | orchestrator |
2026-02-03 03:10:44.959119 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-02-03 03:10:44.959136 | orchestrator | Tuesday 03 February 2026 03:09:53 +0000 (0:00:00.543) 0:03:11.633 ******
2026-02-03 03:10:44.959154 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-03 03:10:44.959171 |
orchestrator | 2026-02-03 03:10:44.959190 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-02-03 03:10:44.959208 | orchestrator | Tuesday 03 February 2026 03:09:54 +0000 (0:00:00.987) 0:03:12.621 ****** 2026-02-03 03:10:44.959256 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-03 03:10:44.959276 | orchestrator | 2026-02-03 03:10:44.959294 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-02-03 03:10:44.959328 | orchestrator | Tuesday 03 February 2026 03:09:55 +0000 (0:00:00.901) 0:03:13.522 ****** 2026-02-03 03:10:44.959348 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:10:44.959367 | orchestrator | 2026-02-03 03:10:44.959385 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-02-03 03:10:44.959401 | orchestrator | Tuesday 03 February 2026 03:09:55 +0000 (0:00:00.121) 0:03:13.644 ****** 2026-02-03 03:10:44.959412 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-03 03:10:44.959423 | orchestrator | 2026-02-03 03:10:44.959434 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-02-03 03:10:44.959445 | orchestrator | Tuesday 03 February 2026 03:09:56 +0000 (0:00:01.013) 0:03:14.658 ****** 2026-02-03 03:10:44.959456 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:10:44.959467 | orchestrator | 2026-02-03 03:10:44.959478 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-02-03 03:10:44.959489 | orchestrator | Tuesday 03 February 2026 03:09:56 +0000 (0:00:00.138) 0:03:14.797 ****** 2026-02-03 03:10:44.959500 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:10:44.959510 | orchestrator | 2026-02-03 03:10:44.959522 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-02-03 03:10:44.959532 | orchestrator | Tuesday 03 
February 2026 03:09:56 +0000 (0:00:00.135) 0:03:14.933 ****** 2026-02-03 03:10:44.959543 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:10:44.959554 | orchestrator | 2026-02-03 03:10:44.959565 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-02-03 03:10:44.959587 | orchestrator | Tuesday 03 February 2026 03:09:56 +0000 (0:00:00.141) 0:03:15.075 ****** 2026-02-03 03:10:44.959599 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:10:44.959610 | orchestrator | 2026-02-03 03:10:44.959621 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-02-03 03:10:44.959632 | orchestrator | Tuesday 03 February 2026 03:09:57 +0000 (0:00:00.145) 0:03:15.220 ****** 2026-02-03 03:10:44.959643 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-03 03:10:44.959654 | orchestrator | 2026-02-03 03:10:44.959664 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-02-03 03:10:44.959675 | orchestrator | Tuesday 03 February 2026 03:10:02 +0000 (0:00:05.463) 0:03:20.685 ****** 2026-02-03 03:10:44.959686 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-02-03 03:10:44.959697 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-02-03 03:10:44.959721 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-02-03 03:11:09.009178 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-02-03 03:11:09.009358 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-02-03 03:11:09.009384 | orchestrator | 2026-02-03 03:11:09.009402 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-02-03 03:11:09.009417 | orchestrator | Tuesday 03 February 2026 03:10:44 +0000 (0:00:42.465) 0:04:03.150 ****** 2026-02-03 03:11:09.009433 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-03 03:11:09.009446 | orchestrator | 2026-02-03 03:11:09.009460 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-02-03 03:11:09.009476 | orchestrator | Tuesday 03 February 2026 03:10:46 +0000 (0:00:01.381) 0:04:04.532 ****** 2026-02-03 03:11:09.009491 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-03 03:11:09.009506 | orchestrator | 2026-02-03 03:11:09.009519 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-02-03 03:11:09.009534 | orchestrator | Tuesday 03 February 2026 03:10:47 +0000 (0:00:01.592) 0:04:06.124 ****** 2026-02-03 03:11:09.009548 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-03 03:11:09.009562 | orchestrator | 2026-02-03 03:11:09.009577 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-02-03 03:11:09.009591 | orchestrator | Tuesday 03 February 2026 03:10:49 +0000 (0:00:01.377) 0:04:07.502 ****** 2026-02-03 03:11:09.009631 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:11:09.009647 | orchestrator | 2026-02-03 03:11:09.009663 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-02-03 03:11:09.009679 | orchestrator 
| Tuesday 03 February 2026 03:10:49 +0000 (0:00:00.125) 0:04:07.628 ****** 2026-02-03 03:11:09.009695 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-02-03 03:11:09.009712 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-02-03 03:11:09.009727 | orchestrator | 2026-02-03 03:11:09.009740 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-02-03 03:11:09.009755 | orchestrator | Tuesday 03 February 2026 03:10:51 +0000 (0:00:01.981) 0:04:09.609 ****** 2026-02-03 03:11:09.009769 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:11:09.009784 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:11:09.009799 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:11:09.009813 | orchestrator | 2026-02-03 03:11:09.009829 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-02-03 03:11:09.009843 | orchestrator | Tuesday 03 February 2026 03:10:51 +0000 (0:00:00.370) 0:04:09.980 ****** 2026-02-03 03:11:09.009858 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:11:09.009873 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:11:09.009889 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:11:09.009905 | orchestrator | 2026-02-03 03:11:09.009919 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-02-03 03:11:09.009934 | orchestrator | 2026-02-03 03:11:09.009949 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-02-03 03:11:09.009964 | orchestrator | Tuesday 03 February 2026 03:10:52 +0000 (0:00:00.916) 0:04:10.896 ****** 2026-02-03 03:11:09.009979 | orchestrator | ok: [testbed-manager] 2026-02-03 03:11:09.009995 | orchestrator | 2026-02-03 03:11:09.010013 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-02-03 03:11:09.010117 | orchestrator | Tuesday 03 February 2026 03:10:53 +0000 (0:00:00.340) 0:04:11.236 ****** 2026-02-03 03:11:09.010134 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-02-03 03:11:09.010150 | orchestrator | 2026-02-03 03:11:09.010165 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-02-03 03:11:09.010180 | orchestrator | Tuesday 03 February 2026 03:10:53 +0000 (0:00:00.219) 0:04:11.456 ****** 2026-02-03 03:11:09.010195 | orchestrator | changed: [testbed-manager] 2026-02-03 03:11:09.010208 | orchestrator | 2026-02-03 03:11:09.010223 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-02-03 03:11:09.010237 | orchestrator | 2026-02-03 03:11:09.010286 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-02-03 03:11:09.010302 | orchestrator | Tuesday 03 February 2026 03:10:58 +0000 (0:00:05.398) 0:04:16.855 ****** 2026-02-03 03:11:09.010317 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:11:09.010333 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:11:09.010348 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:11:09.010364 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:11:09.010378 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:11:09.010393 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:11:09.010408 | orchestrator | 2026-02-03 03:11:09.010423 | orchestrator | TASK [Manage labels] *********************************************************** 2026-02-03 03:11:09.010439 | orchestrator | Tuesday 03 February 2026 03:10:59 +0000 (0:00:00.607) 0:04:17.462 ****** 2026-02-03 03:11:09.010455 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-03 03:11:09.010470 | orchestrator | ok: [testbed-node-4 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2026-02-03 03:11:09.010485 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-03 03:11:09.010500 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-03 03:11:09.010532 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-03 03:11:09.010548 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-03 03:11:09.010562 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-03 03:11:09.010577 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-03 03:11:09.010613 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-03 03:11:09.010658 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-03 03:11:09.010674 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-03 03:11:09.010690 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-03 03:11:09.010705 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-03 03:11:09.010719 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-03 03:11:09.010734 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-03 03:11:09.010769 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-03 03:11:09.010784 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-03 03:11:09.010798 | orchestrator | ok: [testbed-node-0 -> localhost] 
=> (item=node-role.osism.tech/network-plane=true) 2026-02-03 03:11:09.010812 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-03 03:11:09.010827 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-03 03:11:09.010842 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-03 03:11:09.010857 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-03 03:11:09.010931 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-03 03:11:09.010948 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-03 03:11:09.010963 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-03 03:11:09.010979 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-03 03:11:09.010993 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-03 03:11:09.011007 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-03 03:11:09.011021 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-03 03:11:09.011030 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-03 03:11:09.011039 | orchestrator | 2026-02-03 03:11:09.011049 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-02-03 03:11:09.011058 | orchestrator | Tuesday 03 February 2026 03:11:07 +0000 (0:00:08.495) 0:04:25.958 ****** 2026-02-03 03:11:09.011067 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:11:09.011076 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:11:09.011084 | orchestrator | 
skipping: [testbed-node-5] 2026-02-03 03:11:09.011093 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:11:09.011102 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:11:09.011111 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:11:09.011120 | orchestrator | 2026-02-03 03:11:09.011128 | orchestrator | TASK [Manage taints] *********************************************************** 2026-02-03 03:11:09.011137 | orchestrator | Tuesday 03 February 2026 03:11:08 +0000 (0:00:00.559) 0:04:26.518 ****** 2026-02-03 03:11:09.011146 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:11:09.011166 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:11:09.011175 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:11:09.011184 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:11:09.011192 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:11:09.011201 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:11:09.011210 | orchestrator | 2026-02-03 03:11:09.011218 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 03:11:09.011228 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-03 03:11:09.011260 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-03 03:11:09.011276 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-03 03:11:09.011290 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-03 03:11:09.011306 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-03 03:11:09.011320 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-03 03:11:09.011333 | orchestrator | testbed-node-5 : ok=16  
changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-03 03:11:09.011348 | orchestrator | 2026-02-03 03:11:09.011357 | orchestrator | 2026-02-03 03:11:09.011366 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 03:11:09.011375 | orchestrator | Tuesday 03 February 2026 03:11:08 +0000 (0:00:00.683) 0:04:27.201 ****** 2026-02-03 03:11:09.011396 | orchestrator | =============================================================================== 2026-02-03 03:11:09.415545 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.04s 2026-02-03 03:11:09.415703 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.47s 2026-02-03 03:11:09.415726 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 27.07s 2026-02-03 03:11:09.415743 | orchestrator | kubectl : Install required packages ------------------------------------ 13.27s 2026-02-03 03:11:09.415759 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.03s 2026-02-03 03:11:09.415775 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 8.81s 2026-02-03 03:11:09.415792 | orchestrator | Manage labels ----------------------------------------------------------- 8.50s 2026-02-03 03:11:09.415808 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.46s 2026-02-03 03:11:09.415824 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.40s 2026-02-03 03:11:09.415841 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.11s 2026-02-03 03:11:09.415858 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.05s 2026-02-03 03:11:09.415876 | orchestrator 
| k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.74s 2026-02-03 03:11:09.415889 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.72s 2026-02-03 03:11:09.415904 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 2.13s 2026-02-03 03:11:09.415918 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.98s 2026-02-03 03:11:09.415933 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.96s 2026-02-03 03:11:09.415949 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 1.93s 2026-02-03 03:11:09.415996 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.79s 2026-02-03 03:11:09.416012 | orchestrator | k3s_server : Clean previous runs of k3s-init ---------------------------- 1.77s 2026-02-03 03:11:09.416029 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.61s 2026-02-03 03:11:09.749420 | orchestrator | + osism apply copy-kubeconfig 2026-02-03 03:11:21.969577 | orchestrator | 2026-02-03 03:11:21 | INFO  | Task 79cd812a-68ab-4825-a443-3765f98e8f20 (copy-kubeconfig) was prepared for execution. 2026-02-03 03:11:21.969719 | orchestrator | 2026-02-03 03:11:21 | INFO  | It takes a moment until task 79cd812a-68ab-4825-a443-3765f98e8f20 (copy-kubeconfig) has been started and output is visible here. 
2026-02-03 03:11:29.171206 | orchestrator | 2026-02-03 03:11:29.171306 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-02-03 03:11:29.171315 | orchestrator | 2026-02-03 03:11:29.171321 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-03 03:11:29.171326 | orchestrator | Tuesday 03 February 2026 03:11:26 +0000 (0:00:00.167) 0:00:00.167 ****** 2026-02-03 03:11:29.171331 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-03 03:11:29.171335 | orchestrator | 2026-02-03 03:11:29.171340 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-03 03:11:29.171345 | orchestrator | Tuesday 03 February 2026 03:11:27 +0000 (0:00:00.777) 0:00:00.944 ****** 2026-02-03 03:11:29.171363 | orchestrator | changed: [testbed-manager] 2026-02-03 03:11:29.171368 | orchestrator | 2026-02-03 03:11:29.171373 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-02-03 03:11:29.171377 | orchestrator | Tuesday 03 February 2026 03:11:28 +0000 (0:00:01.310) 0:00:02.255 ****** 2026-02-03 03:11:29.171384 | orchestrator | changed: [testbed-manager] 2026-02-03 03:11:29.171388 | orchestrator | 2026-02-03 03:11:29.171396 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 03:11:29.171400 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-03 03:11:29.171406 | orchestrator | 2026-02-03 03:11:29.171410 | orchestrator | 2026-02-03 03:11:29.171414 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 03:11:29.171419 | orchestrator | Tuesday 03 February 2026 03:11:28 +0000 (0:00:00.471) 0:00:02.727 ****** 2026-02-03 03:11:29.171423 | orchestrator | 
=============================================================================== 2026-02-03 03:11:29.171427 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.31s 2026-02-03 03:11:29.171431 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.78s 2026-02-03 03:11:29.171435 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.47s 2026-02-03 03:11:29.516653 | orchestrator | + sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh 2026-02-03 03:11:41.715188 | orchestrator | 2026-02-03 03:11:41 | INFO  | Task 9e1556b2-7783-4c93-b9fc-b7d7ce95d297 (openstackclient) was prepared for execution. 2026-02-03 03:11:41.715341 | orchestrator | 2026-02-03 03:11:41 | INFO  | It takes a moment until task 9e1556b2-7783-4c93-b9fc-b7d7ce95d297 (openstackclient) has been started and output is visible here. 2026-02-03 03:12:29.596035 | orchestrator | 2026-02-03 03:12:29.596184 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-02-03 03:12:29.596209 | orchestrator | 2026-02-03 03:12:29.596226 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-02-03 03:12:29.596242 | orchestrator | Tuesday 03 February 2026 03:11:46 +0000 (0:00:00.248) 0:00:00.248 ****** 2026-02-03 03:12:29.596258 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-02-03 03:12:29.596274 | orchestrator | 2026-02-03 03:12:29.596352 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-02-03 03:12:29.596371 | orchestrator | Tuesday 03 February 2026 03:11:46 +0000 (0:00:00.231) 0:00:00.480 ****** 2026-02-03 03:12:29.596387 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-02-03 
03:12:29.596404 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-02-03 03:12:29.596419 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-02-03 03:12:29.596435 | orchestrator | 2026-02-03 03:12:29.596451 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-02-03 03:12:29.596467 | orchestrator | Tuesday 03 February 2026 03:11:47 +0000 (0:00:01.304) 0:00:01.785 ****** 2026-02-03 03:12:29.596483 | orchestrator | changed: [testbed-manager] 2026-02-03 03:12:29.596495 | orchestrator | 2026-02-03 03:12:29.596504 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-02-03 03:12:29.596513 | orchestrator | Tuesday 03 February 2026 03:11:49 +0000 (0:00:01.471) 0:00:03.256 ****** 2026-02-03 03:12:29.596522 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-02-03 03:12:29.596531 | orchestrator | ok: [testbed-manager] 2026-02-03 03:12:29.596541 | orchestrator | 2026-02-03 03:12:29.596550 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-02-03 03:12:29.596559 | orchestrator | Tuesday 03 February 2026 03:12:24 +0000 (0:00:35.144) 0:00:38.400 ****** 2026-02-03 03:12:29.596568 | orchestrator | changed: [testbed-manager] 2026-02-03 03:12:29.596579 | orchestrator | 2026-02-03 03:12:29.596594 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-02-03 03:12:29.596608 | orchestrator | Tuesday 03 February 2026 03:12:25 +0000 (0:00:00.952) 0:00:39.353 ****** 2026-02-03 03:12:29.596623 | orchestrator | ok: [testbed-manager] 2026-02-03 03:12:29.596639 | orchestrator | 2026-02-03 03:12:29.596656 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-02-03 03:12:29.596671 | orchestrator | Tuesday 03 February 2026 03:12:25 
+0000 (0:00:00.624) 0:00:39.978 ****** 2026-02-03 03:12:29.596687 | orchestrator | changed: [testbed-manager] 2026-02-03 03:12:29.596701 | orchestrator | 2026-02-03 03:12:29.596713 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-02-03 03:12:29.596723 | orchestrator | Tuesday 03 February 2026 03:12:27 +0000 (0:00:01.469) 0:00:41.447 ****** 2026-02-03 03:12:29.596733 | orchestrator | changed: [testbed-manager] 2026-02-03 03:12:29.596742 | orchestrator | 2026-02-03 03:12:29.596752 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-02-03 03:12:29.596761 | orchestrator | Tuesday 03 February 2026 03:12:28 +0000 (0:00:00.723) 0:00:42.170 ****** 2026-02-03 03:12:29.596770 | orchestrator | changed: [testbed-manager] 2026-02-03 03:12:29.596779 | orchestrator | 2026-02-03 03:12:29.596788 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-02-03 03:12:29.596797 | orchestrator | Tuesday 03 February 2026 03:12:28 +0000 (0:00:00.654) 0:00:42.825 ****** 2026-02-03 03:12:29.596806 | orchestrator | ok: [testbed-manager] 2026-02-03 03:12:29.596815 | orchestrator | 2026-02-03 03:12:29.596825 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 03:12:29.596834 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-03 03:12:29.596845 | orchestrator | 2026-02-03 03:12:29.596854 | orchestrator | 2026-02-03 03:12:29.596863 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 03:12:29.596872 | orchestrator | Tuesday 03 February 2026 03:12:29 +0000 (0:00:00.460) 0:00:43.285 ****** 2026-02-03 03:12:29.596881 | orchestrator | =============================================================================== 2026-02-03 03:12:29.596890 | orchestrator | 
osism.services.openstackclient : Manage openstackclient service -------- 35.14s 2026-02-03 03:12:29.596900 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.47s 2026-02-03 03:12:29.596919 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.47s 2026-02-03 03:12:29.596929 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.30s 2026-02-03 03:12:29.596938 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.95s 2026-02-03 03:12:29.596947 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.72s 2026-02-03 03:12:29.596956 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.65s 2026-02-03 03:12:29.596965 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.62s 2026-02-03 03:12:29.596973 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.46s 2026-02-03 03:12:29.596981 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.23s 2026-02-03 03:12:32.008498 | orchestrator | 2026-02-03 03:12:32 | INFO  | Task 8d24d80f-69a6-4cdd-aaa9-8b0dc5d4d9c2 (common) was prepared for execution. 2026-02-03 03:12:32.008594 | orchestrator | 2026-02-03 03:12:32 | INFO  | It takes a moment until task 8d24d80f-69a6-4cdd-aaa9-8b0dc5d4d9c2 (common) has been started and output is visible here. 
2026-02-03 03:12:44.514488 | orchestrator | 2026-02-03 03:12:44.514597 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-03 03:12:44.514613 | orchestrator | 2026-02-03 03:12:44.514624 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-03 03:12:44.514635 | orchestrator | Tuesday 03 February 2026 03:12:36 +0000 (0:00:00.284) 0:00:00.284 ****** 2026-02-03 03:12:44.514646 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-03 03:12:44.514657 | orchestrator | 2026-02-03 03:12:44.514668 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-03 03:12:44.514678 | orchestrator | Tuesday 03 February 2026 03:12:37 +0000 (0:00:01.354) 0:00:01.639 ****** 2026-02-03 03:12:44.514689 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-03 03:12:44.514699 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-03 03:12:44.514709 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-03 03:12:44.514720 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-03 03:12:44.514734 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-03 03:12:44.514751 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-03 03:12:44.514777 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-03 03:12:44.514794 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-03 03:12:44.514810 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 
'fluentd']) 2026-02-03 03:12:44.514848 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-03 03:12:44.514865 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-03 03:12:44.514882 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-03 03:12:44.514900 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-03 03:12:44.514916 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-03 03:12:44.514932 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-03 03:12:44.514950 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-03 03:12:44.514966 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-03 03:12:44.515007 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-03 03:12:44.515023 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-03 03:12:44.515039 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-03 03:12:44.515063 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-03 03:12:44.515082 | orchestrator | 2026-02-03 03:12:44.515099 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-03 03:12:44.515115 | orchestrator | Tuesday 03 February 2026 03:12:40 +0000 (0:00:02.688) 0:00:04.328 ****** 2026-02-03 03:12:44.515132 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2026-02-03 03:12:44.515148 | orchestrator | 2026-02-03 03:12:44.515164 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-03 03:12:44.515189 | orchestrator | Tuesday 03 February 2026 03:12:41 +0000 (0:00:01.368) 0:00:05.697 ****** 2026-02-03 03:12:44.515211 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 03:12:44.515234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 03:12:44.515282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 03:12:44.515296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 03:12:44.515308 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 03:12:44.515319 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 03:12:44.515369 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 03:12:44.515381 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:12:44.515391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:12:44.515419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:12:45.756110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:12:45.756178 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:12:45.756199 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:12:45.756205 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:12:45.756212 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:12:45.756224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 
03:12:45.756230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:12:45.756251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:12:45.756256 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:12:45.756261 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:12:45.756269 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:12:45.756274 | orchestrator | 2026-02-03 03:12:45.756280 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-03 03:12:45.756286 | orchestrator | Tuesday 03 February 2026 03:12:45 +0000 (0:00:03.757) 0:00:09.454 ****** 2026-02-03 03:12:45.756293 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-03 03:12:45.756298 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:12:45.756304 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:12:45.756309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-03 03:12:45.756320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:12:46.382258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:12:46.382405 | orchestrator | skipping: [testbed-manager] 2026-02-03 03:12:46.382464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-03 03:12:46.382478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:12:46.382489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:12:46.382499 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:12:46.382511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-03 03:12:46.382526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:12:46.382536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:12:46.382546 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:12:46.382580 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-03 03:12:46.382600 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:12:46.382610 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:12:46.382619 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:12:46.382625 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:12:46.382632 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-03 03:12:46.382638 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:12:46.382644 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:12:46.382650 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:12:46.382656 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-03 03:12:46.382669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:12:47.228967 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:12:47.229029 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:12:47.229036 | orchestrator | 2026-02-03 03:12:47.229042 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-03 03:12:47.229047 | orchestrator | Tuesday 03 February 2026 03:12:46 +0000 (0:00:00.930) 0:00:10.385 ****** 2026-02-03 03:12:47.229052 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-03 03:12:47.229059 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:12:47.229063 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:12:47.229067 | orchestrator | skipping: [testbed-manager]
2026-02-03 03:12:47.229083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-03 03:12:47.229090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:12:47.229111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:12:47.229115 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:12:47.229134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-03 03:12:47.229138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:12:47.229142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:12:47.229146 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:12:47.229150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-03 03:12:47.229154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:12:47.229160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:12:47.229168 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:12:47.229172 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-03 03:12:47.229185 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:12:52.305482 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:12:52.305630 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:12:52.305647 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-03 03:12:52.305658 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:12:52.305665 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:12:52.305671 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:12:52.305677 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-03 03:12:52.305703 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:12:52.305710 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:12:52.305716 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:12:52.305722 | orchestrator |
2026-02-03 03:12:52.305729 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-02-03 03:12:52.305736 | orchestrator | Tuesday 03 February 2026 03:12:48 +0000 (0:00:01.764) 0:00:12.149 ******
2026-02-03 03:12:52.305742 | orchestrator | skipping: [testbed-manager]
2026-02-03 03:12:52.305748 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:12:52.305754 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:12:52.305760 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:12:52.305781 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:12:52.305787 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:12:52.305793 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:12:52.305798 | orchestrator |
2026-02-03 03:12:52.305805 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-02-03 03:12:52.305811 | orchestrator | Tuesday 03 February 2026 03:12:48 +0000 (0:00:00.692) 0:00:12.841 ******
2026-02-03 03:12:52.305817 | orchestrator | skipping: [testbed-manager]
2026-02-03 03:12:52.305822 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:12:52.305828 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:12:52.305834 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:12:52.305840 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:12:52.305846 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:12:52.305852 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:12:52.305857 | orchestrator |
2026-02-03 03:12:52.305863 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-02-03 03:12:52.305869 | orchestrator | Tuesday 03 February 2026 03:12:49 +0000 (0:00:00.876) 0:00:13.718 ******
2026-02-03 03:12:52.305876 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-03 03:12:52.305896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-03 03:12:52.305907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-03 03:12:52.305917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-03 03:12:52.305923 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-03 03:12:52.305930 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-03 03:12:52.305941 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-03 03:12:55.139479 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:12:55.139610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:12:55.139660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:12:55.139692 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:12:55.139748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:12:55.139763 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:12:55.139809 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:12:55.139824 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:12:55.139838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:12:55.139859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:12:55.139872 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:12:55.139883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:12:55.139895 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:12:55.139906 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:12:55.139921 | orchestrator |
2026-02-03 03:12:55.139935 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-02-03 03:12:55.139950 | orchestrator | Tuesday 03 February 2026 03:12:53 +0000 (0:00:03.492) 0:00:17.210 ******
2026-02-03 03:12:55.139968 | orchestrator | [WARNING]: Skipped
2026-02-03 03:12:55.139988 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-02-03 03:12:55.140008 | orchestrator | to this access issue:
2026-02-03 03:12:55.140027 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-02-03 03:12:55.140039 | orchestrator | directory
2026-02-03 03:12:55.140052 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-03 03:12:55.140066 | orchestrator |
2026-02-03 03:12:55.140081 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-02-03 03:12:55.140100 | orchestrator | Tuesday 03 February 2026 03:12:54 +0000 (0:00:00.965) 0:00:18.176 ******
2026-02-03 03:12:55.140116 | orchestrator | [WARNING]: Skipped
2026-02-03 03:12:55.140135 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-02-03 03:13:05.286383 | orchestrator | to this access issue:
2026-02-03 03:13:05.286467 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-02-03 03:13:05.286481 | orchestrator | directory
2026-02-03 03:13:05.286486 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-03 03:13:05.286492 | orchestrator |
2026-02-03 03:13:05.286496 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-02-03 03:13:05.286502 | orchestrator | Tuesday 03 February 2026 03:12:55 +0000 (0:00:01.258) 0:00:19.435 ******
2026-02-03 03:13:05.286521 | orchestrator | [WARNING]: Skipped
2026-02-03 03:13:05.286525 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-02-03 03:13:05.286529 | orchestrator | to this access issue:
2026-02-03 03:13:05.286533 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-02-03 03:13:05.286537 | orchestrator | directory
2026-02-03 03:13:05.286541 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-03 03:13:05.286545 | orchestrator |
2026-02-03 03:13:05.286549 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-02-03 03:13:05.286554 | orchestrator | Tuesday 03 February 2026 03:12:56 +0000 (0:00:00.870) 0:00:20.312 ******
2026-02-03 03:13:05.286557 | orchestrator | [WARNING]: Skipped
2026-02-03 03:13:05.286561 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-02-03 03:13:05.286565 | orchestrator | to this access issue:
2026-02-03 03:13:05.286569 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-02-03 03:13:05.286573 | orchestrator | directory
2026-02-03 03:13:05.286577 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-03 03:13:05.286580 | orchestrator |
2026-02-03 03:13:05.286584 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-02-03 03:13:05.286588 | orchestrator | Tuesday 03 February 2026 03:12:57 +0000 (0:00:00.876) 0:00:21.182 ******
2026-02-03 03:13:05.286592 | orchestrator | changed: [testbed-manager]
2026-02-03 03:13:05.286596 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:13:05.286600 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:13:05.286604 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:13:05.286608 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:13:05.286612 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:13:05.286626 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:13:05.286630 | orchestrator |
2026-02-03 03:13:05.286634 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-02-03 03:13:05.286638 | orchestrator | Tuesday 03 February 2026 03:12:59 +0000 (0:00:02.597) 0:00:23.779 ******
2026-02-03 03:13:05.286642 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-03 03:13:05.286647 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-03 03:13:05.286651 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-03 03:13:05.286655 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-03 03:13:05.286658 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-03 03:13:05.286662 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-03 03:13:05.286670 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-03 03:13:05.286674 | orchestrator |
2026-02-03 03:13:05.286678 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-02-03 03:13:05.286682 | orchestrator | Tuesday 03 February 2026 03:13:01 +0000 (0:00:02.164) 0:00:25.944 ******
2026-02-03 03:13:05.286686 | orchestrator | changed: [testbed-manager]
2026-02-03 03:13:05.286690 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:13:05.286694 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:13:05.286697 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:13:05.286701 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:13:05.286705 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:13:05.286709 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:13:05.286713 | orchestrator |
2026-02-03 03:13:05.286717 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-02-03 03:13:05.286724 | orchestrator | Tuesday 03 February 2026 03:13:03 +0000 (0:00:02.007) 0:00:27.951 ******
2026-02-03 03:13:05.286730 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-03 03:13:05.286745 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:13:05.286750 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-03 03:13:05.286754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:13:05.286758 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-03 03:13:05.286765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:13:05.286769 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-03 03:13:05.286776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:13:05.286785 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:13:05.286794 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-03 03:13:11.430285 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:13:11.430427 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:13:11.430452 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-03 03:13:11.430496 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:13:11.430513 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:13:11.430551 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-03 03:13:11.430566 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/',
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:13:11.430600 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:13:11.430614 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:13:11.430628 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:13:11.430642 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:13:11.430656 | orchestrator | 2026-02-03 03:13:11.430671 | 
orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-02-03 03:13:11.430686 | orchestrator | Tuesday 03 February 2026 03:13:05 +0000 (0:00:01.604) 0:00:29.555 ******
2026-02-03 03:13:11.430694 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-03 03:13:11.430701 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-03 03:13:11.430715 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-03 03:13:11.430723 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-03 03:13:11.430730 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-03 03:13:11.430737 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-03 03:13:11.430745 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-03 03:13:11.430752 | orchestrator |
2026-02-03 03:13:11.430759 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-02-03 03:13:11.430767 | orchestrator | Tuesday 03 February 2026 03:13:07 +0000 (0:00:02.069) 0:00:31.624 ******
2026-02-03 03:13:11.430774 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-03 03:13:11.430782 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-03 03:13:11.430789 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-03 03:13:11.430804 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-03 03:13:11.430813 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-03 03:13:11.430822 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-03 03:13:11.430830 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-03 03:13:11.430839 | orchestrator |
2026-02-03 03:13:11.430848 | orchestrator | TASK [common : Check common containers] ****************************************
2026-02-03 03:13:11.430856 | orchestrator | Tuesday 03 February 2026 03:13:09 +0000 (0:00:01.743) 0:00:33.368 ******
2026-02-03 03:13:11.430865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-03 03:13:11.430883 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-03 03:13:12.008986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-03 03:13:12.009068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-03 03:13:12.009106 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-03 03:13:12.009122 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-03 03:13:12.009127 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-03 03:13:12.009133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:13:12.009138 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:13:12.009155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:13:12.009161 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:13:12.009172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:13:12.009177 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:13:12.009181 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:13:12.009187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:13:12.009193 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:13:12.009204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:14:32.817186 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:14:32.817305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:14:32.817314 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:14:32.817334 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:14:32.817340 | orchestrator |
2026-02-03 03:14:32.817348 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-02-03 03:14:32.817356 | orchestrator | Tuesday 03 February 2026 03:13:11 +0000 (0:00:02.645) 0:00:36.014 ******
2026-02-03 03:14:32.817362 | orchestrator | changed: [testbed-manager]
2026-02-03 03:14:32.817370 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:14:32.817376 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:14:32.817382 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:14:32.817389 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:14:32.817395 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:14:32.817401 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:14:32.817408 | orchestrator |
2026-02-03 03:14:32.817415 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-02-03 03:14:32.817487 | orchestrator | Tuesday 03 February 2026 03:13:13 +0000 (0:00:01.466) 0:00:37.481 ******
2026-02-03 03:14:32.817495 | orchestrator | changed: [testbed-manager]
2026-02-03 03:14:32.817501 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:14:32.817507 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:14:32.817512 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:14:32.817518 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:14:32.817525 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:14:32.817531 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:14:32.817537 | orchestrator |
2026-02-03 03:14:32.817543 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-03 03:14:32.817549 | orchestrator | Tuesday 03 February 2026 03:13:14 +0000 (0:00:00.065) 0:00:38.591 ******
2026-02-03 03:14:32.817555 | orchestrator |
2026-02-03 03:14:32.817561 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-03 03:14:32.817568 | orchestrator | Tuesday 03 February 2026 03:13:14 +0000 (0:00:00.065) 0:00:38.656 ******
2026-02-03 03:14:32.817573 | orchestrator |
2026-02-03 03:14:32.817579 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-03 03:14:32.817586 | orchestrator | Tuesday 03 February 2026 03:13:14 +0000 (0:00:00.067) 0:00:38.721 ******
2026-02-03 03:14:32.817592 | orchestrator |
2026-02-03 03:14:32.817598 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-03 03:14:32.817604 | orchestrator | Tuesday 03 February 2026 03:13:14 +0000 (0:00:00.229) 0:00:39.018 ******
2026-02-03 03:14:32.817610 | orchestrator |
2026-02-03 03:14:32.817616 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-03 03:14:32.817630 | orchestrator | Tuesday 03 February 2026 03:13:15 +0000 (0:00:00.076) 0:00:39.094 ******
2026-02-03 03:14:32.817636 | orchestrator |
2026-02-03 03:14:32.817642 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-03 03:14:32.817648 | orchestrator | Tuesday 03 February 2026 03:13:15 +0000 (0:00:00.067) 0:00:39.162 ******
2026-02-03 03:14:32.817654 | orchestrator |
2026-02-03 03:14:32.817660 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-03 03:14:32.817666 | orchestrator | Tuesday 03 February 2026 03:13:15 +0000 (0:00:00.067) 0:00:39.162 ******
2026-02-03 03:14:32.817673 | orchestrator |
2026-02-03 03:14:32.817683 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-02-03 03:14:32.817689 | orchestrator | Tuesday 03 February 2026 03:13:15 +0000 (0:00:00.091) 0:00:39.254 ******
2026-02-03 03:14:32.817695 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:14:32.817701 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:14:32.817707 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:14:32.817713 | orchestrator | changed: [testbed-manager]
2026-02-03 03:14:32.817719 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:14:32.817740 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:14:32.817746 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:14:32.817752 | orchestrator |
2026-02-03 03:14:32.817759 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-02-03 03:14:32.817765 | orchestrator | Tuesday 03 February 2026 03:13:50 +0000 (0:00:35.104) 0:01:14.358 ******
2026-02-03 03:14:32.817772 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:14:32.817779 | orchestrator | changed: [testbed-manager]
2026-02-03 03:14:32.817785 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:14:32.817791 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:14:32.817797 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:14:32.817804 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:14:32.817810 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:14:32.817816 | orchestrator |
2026-02-03 03:14:32.817823 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-02-03 03:14:32.817829 | orchestrator | Tuesday 03 February 2026 03:14:22 +0000 (0:00:31.868) 0:01:46.226 ******
2026-02-03 03:14:32.817835 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:14:32.817843 | orchestrator | ok: [testbed-manager]
2026-02-03 03:14:32.817850 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:14:32.817856 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:14:32.817863 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:14:32.817870 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:14:32.817876 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:14:32.817883 | orchestrator |
2026-02-03 03:14:32.817889 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-02-03 03:14:32.817896 | orchestrator | Tuesday 03 February 2026 03:14:24 +0000 (0:00:01.979) 0:01:48.205 ******
2026-02-03 03:14:32.817902 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:14:32.817908 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:14:32.817914 | orchestrator | changed: [testbed-manager]
2026-02-03 03:14:32.817921 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:14:32.817927 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:14:32.817933 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:14:32.817940 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:14:32.817946 | orchestrator |
2026-02-03 03:14:32.817952 | orchestrator | PLAY RECAP *********************************************************************
2026-02-03 03:14:32.817960 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-03 03:14:32.817968 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-03 03:14:32.817981 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-03 03:14:32.817993 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-03 03:14:32.818000 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-03 03:14:32.818006 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-03 03:14:32.818012 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-03 03:14:32.818087 | orchestrator |
2026-02-03 03:14:32.818095 | orchestrator |
2026-02-03 03:14:32.818115 | orchestrator | TASKS RECAP ********************************************************************
2026-02-03 03:14:32.818122 | orchestrator | Tuesday 03 February 2026 03:14:32 +0000 (0:00:08.597) 0:01:56.803 ******
2026-02-03 03:14:32.818129 | orchestrator | ===============================================================================
2026-02-03 03:14:32.818136 | orchestrator | common : Restart fluentd container ------------------------------------- 35.10s
2026-02-03 03:14:32.818143 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 31.87s
2026-02-03 03:14:32.818150 | orchestrator | common : Restart cron container ----------------------------------------- 8.60s
2026-02-03 03:14:32.818156 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.76s
2026-02-03 03:14:32.818163 | orchestrator | common : Copying over config.json files for services -------------------- 3.49s
2026-02-03 03:14:32.818169 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.69s
2026-02-03 03:14:32.818175 | orchestrator | common : Check common containers ---------------------------------------- 2.65s
2026-02-03 03:14:32.818182 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.60s
2026-02-03 03:14:32.818188 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.16s
2026-02-03 03:14:32.818194 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.07s
2026-02-03 03:14:32.818200 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.01s
2026-02-03 03:14:32.818206 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.98s
2026-02-03 03:14:32.818212 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 1.76s
2026-02-03 03:14:32.818218 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.74s
2026-02-03 03:14:32.818224 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.60s
2026-02-03 03:14:32.818230 | orchestrator | common : Creating log volume -------------------------------------------- 1.47s
2026-02-03 03:14:32.818243 | orchestrator | common : include_tasks -------------------------------------------------- 1.37s
2026-02-03 03:14:33.244671 | orchestrator | common : include_tasks -------------------------------------------------- 1.35s
2026-02-03 03:14:33.244743 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.26s
2026-02-03 03:14:33.244750 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.11s
2026-02-03 03:14:35.687027 | orchestrator | 2026-02-03 03:14:35 | INFO  | Task e566217f-b78a-4c3f-9203-bd09cd46ea96 (loadbalancer) was prepared for execution.
2026-02-03 03:14:35.687113 | orchestrator | 2026-02-03 03:14:35 | INFO  | It takes a moment until task e566217f-b78a-4c3f-9203-bd09cd46ea96 (loadbalancer) has been started and output is visible here.
2026-02-03 03:14:50.780965 | orchestrator |
2026-02-03 03:14:50.781055 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-03 03:14:50.781065 | orchestrator |
2026-02-03 03:14:50.781071 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-03 03:14:50.781078 | orchestrator | Tuesday 03 February 2026 03:14:40 +0000 (0:00:00.251) 0:00:00.251 ******
2026-02-03 03:14:50.781102 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:14:50.781110 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:14:50.781115 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:14:50.781121 | orchestrator |
2026-02-03 03:14:50.781126 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-03 03:14:50.781132 | orchestrator | Tuesday 03 February 2026 03:14:40 +0000 (0:00:00.319) 0:00:00.570 ******
2026-02-03 03:14:50.781139 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-02-03 03:14:50.781145 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-02-03 03:14:50.781150 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-02-03 03:14:50.781156 | orchestrator |
2026-02-03 03:14:50.781161 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-02-03 03:14:50.781167 | orchestrator |
2026-02-03 03:14:50.781173 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-03 03:14:50.781190 | orchestrator | Tuesday 03 February 2026 03:14:40 +0000 (0:00:00.463) 0:00:01.033 ******
2026-02-03 03:14:50.781196 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 03:14:50.781221 | orchestrator |
2026-02-03 03:14:50.781235 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-02-03 03:14:50.781240 | orchestrator | Tuesday 03 February 2026 03:14:41 +0000 (0:00:00.541) 0:00:01.575 ******
2026-02-03 03:14:50.781246 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:14:50.781251 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:14:50.781259 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:14:50.781284 | orchestrator |
2026-02-03 03:14:50.781293 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-02-03 03:14:50.781303 | orchestrator | Tuesday 03 February 2026 03:14:41 +0000 (0:00:00.598) 0:00:02.173 ******
2026-02-03 03:14:50.781309 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 03:14:50.781322 | orchestrator |
2026-02-03 03:14:50.781328 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-02-03 03:14:50.781339 | orchestrator | Tuesday 03 February 2026 03:14:42 +0000 (0:00:00.708) 0:00:02.882 ******
2026-02-03 03:14:50.781345 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:14:50.781351 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:14:50.781356 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:14:50.781361 | orchestrator |
2026-02-03 03:14:50.781367 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-02-03 03:14:50.781372 | orchestrator | Tuesday 03 February 2026 03:14:43 +0000 (0:00:00.632) 0:00:03.514 ******
2026-02-03 03:14:50.781378 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-03 03:14:50.781384 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-03 03:14:50.781389 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-03 03:14:50.781395 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-03 03:14:50.781400 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-03 03:14:50.781405 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-03 03:14:50.781411 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-03 03:14:50.781418 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-03 03:14:50.781423 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-03 03:14:50.781429 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-03 03:14:50.781474 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-03 03:14:50.781480 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-03 03:14:50.781486 | orchestrator |
2026-02-03 03:14:50.781491 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-03 03:14:50.781497 | orchestrator | Tuesday 03 February 2026 03:14:46 +0000 (0:00:03.144) 0:00:06.659 ******
2026-02-03 03:14:50.781502 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-02-03 03:14:50.781508 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-02-03 03:14:50.781514 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-02-03 03:14:50.781519 | orchestrator |
2026-02-03 03:14:50.781526 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-03 03:14:50.781533 | orchestrator | Tuesday 03 February 2026 03:14:47 +0000 (0:00:00.717) 0:00:07.376 ******
2026-02-03 03:14:50.781539 | orchestrator | changed: [testbed-node-1] =>
(item=ip_vs) 2026-02-03 03:14:50.781553 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-02-03 03:14:50.781559 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-02-03 03:14:50.781565 | orchestrator | 2026-02-03 03:14:50.781572 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-03 03:14:50.781579 | orchestrator | Tuesday 03 February 2026 03:14:48 +0000 (0:00:01.256) 0:00:08.632 ****** 2026-02-03 03:14:50.781592 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-02-03 03:14:50.781599 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:14:50.781657 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-02-03 03:14:50.781666 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:14:50.781672 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-02-03 03:14:50.781679 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:14:50.781685 | orchestrator | 2026-02-03 03:14:50.781692 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-02-03 03:14:50.781697 | orchestrator | Tuesday 03 February 2026 03:14:48 +0000 (0:00:00.525) 0:00:09.157 ****** 2026-02-03 03:14:50.781709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-03 03:14:50.781719 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-03 03:14:50.781724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-03 03:14:50.781735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 
03:14:50.781741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 03:14:50.781751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 03:14:55.960625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-03 03:14:55.960718 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-03 03:14:55.960727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-03 03:14:55.960733 | orchestrator | 2026-02-03 03:14:55.960739 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-03 03:14:55.960745 | orchestrator | Tuesday 03 February 2026 03:14:50 +0000 (0:00:01.847) 0:00:11.005 ****** 2026-02-03 03:14:55.960751 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:14:55.960771 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:14:55.960776 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:14:55.960782 | orchestrator | 2026-02-03 03:14:55.960787 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-03 03:14:55.960791 | orchestrator | Tuesday 03 February 2026 03:14:51 +0000 (0:00:00.870) 0:00:11.875 ****** 2026-02-03 03:14:55.960797 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-02-03 03:14:55.960802 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-02-03 
03:14:55.960806 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-02-03 03:14:55.960811 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-02-03 03:14:55.960816 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-02-03 03:14:55.960820 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-02-03 03:14:55.960825 | orchestrator | 2026-02-03 03:14:55.960829 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-02-03 03:14:55.960834 | orchestrator | Tuesday 03 February 2026 03:14:53 +0000 (0:00:01.460) 0:00:13.336 ****** 2026-02-03 03:14:55.960838 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:14:55.960843 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:14:55.960847 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:14:55.960852 | orchestrator | 2026-02-03 03:14:55.960857 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-02-03 03:14:55.960861 | orchestrator | Tuesday 03 February 2026 03:14:53 +0000 (0:00:00.887) 0:00:14.223 ****** 2026-02-03 03:14:55.960866 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:14:55.960871 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:14:55.960875 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:14:55.960880 | orchestrator | 2026-02-03 03:14:55.960884 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-02-03 03:14:55.960889 | orchestrator | Tuesday 03 February 2026 03:14:55 +0000 (0:00:01.397) 0:00:15.621 ****** 2026-02-03 03:14:55.960894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-03 03:14:55.960911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 03:14:55.960916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 03:14:55.960922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__33479b22d22f454857fc0dbdc6cb9f8018c9bbb4', '__omit_place_holder__33479b22d22f454857fc0dbdc6cb9f8018c9bbb4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-03 03:14:55.960931 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:14:55.960936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-03 03:14:55.960965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 03:14:55.960971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 03:14:55.960976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__33479b22d22f454857fc0dbdc6cb9f8018c9bbb4', '__omit_place_holder__33479b22d22f454857fc0dbdc6cb9f8018c9bbb4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-03 03:14:55.960981 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:14:55.960990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-03 03:14:58.835776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 03:14:58.835874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 03:14:58.835882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__33479b22d22f454857fc0dbdc6cb9f8018c9bbb4', '__omit_place_holder__33479b22d22f454857fc0dbdc6cb9f8018c9bbb4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-03 03:14:58.835888 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:14:58.835895 | orchestrator | 2026-02-03 03:14:58.835901 | orchestrator | TASK [loadbalancer : Copying checks for services 
which are enabled] ************ 2026-02-03 03:14:58.835906 | orchestrator | Tuesday 03 February 2026 03:14:55 +0000 (0:00:00.572) 0:00:16.194 ****** 2026-02-03 03:14:58.835911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-03 03:14:58.835917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-03 03:14:58.835921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-03 03:14:58.835953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 03:14:58.835959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 03:14:58.835964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 03:14:58.835968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 03:14:58.835973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__33479b22d22f454857fc0dbdc6cb9f8018c9bbb4', '__omit_place_holder__33479b22d22f454857fc0dbdc6cb9f8018c9bbb4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-03 03:14:58.835979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__33479b22d22f454857fc0dbdc6cb9f8018c9bbb4', 
'__omit_place_holder__33479b22d22f454857fc0dbdc6cb9f8018c9bbb4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-03 03:14:58.836002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 03:15:07.217007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 03:15:07.217101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__33479b22d22f454857fc0dbdc6cb9f8018c9bbb4', 
'__omit_place_holder__33479b22d22f454857fc0dbdc6cb9f8018c9bbb4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-03 03:15:07.217110 | orchestrator | 2026-02-03 03:15:07.217117 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-02-03 03:15:07.217123 | orchestrator | Tuesday 03 February 2026 03:14:58 +0000 (0:00:02.873) 0:00:19.067 ****** 2026-02-03 03:15:07.217127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-03 03:15:07.217132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-03 03:15:07.217137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-03 03:15:07.217156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 03:15:07.217183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 03:15:07.217188 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 03:15:07.217192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-03 03:15:07.217197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-03 03:15:07.217201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-03 03:15:07.217205 | orchestrator | 2026-02-03 03:15:07.217209 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-02-03 03:15:07.217213 | orchestrator | Tuesday 03 February 2026 03:15:01 +0000 (0:00:03.174) 0:00:22.242 ****** 2026-02-03 03:15:07.217223 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-03 03:15:07.217228 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-03 03:15:07.217232 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-03 03:15:07.217236 | orchestrator | 2026-02-03 03:15:07.217240 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-02-03 03:15:07.217244 | orchestrator | Tuesday 03 February 2026 03:15:03 +0000 (0:00:01.913) 0:00:24.155 ****** 2026-02-03 03:15:07.217248 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-03 03:15:07.217252 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-03 03:15:07.217256 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-03 03:15:07.217259 | orchestrator | 2026-02-03 03:15:07.217263 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-02-03 03:15:07.217267 | orchestrator | Tuesday 03 February 2026 03:15:06 +0000 
(0:00:02.758) 0:00:26.914 ****** 2026-02-03 03:15:07.217271 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:15:07.217277 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:15:07.217281 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:15:07.217285 | orchestrator | 2026-02-03 03:15:07.217293 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-02-03 03:15:18.835977 | orchestrator | Tuesday 03 February 2026 03:15:07 +0000 (0:00:00.541) 0:00:27.456 ****** 2026-02-03 03:15:18.836057 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-03 03:15:18.836077 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-03 03:15:18.836085 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-03 03:15:18.836093 | orchestrator | 2026-02-03 03:15:18.836101 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-02-03 03:15:18.836109 | orchestrator | Tuesday 03 February 2026 03:15:09 +0000 (0:00:02.070) 0:00:29.526 ****** 2026-02-03 03:15:18.836119 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-03 03:15:18.836132 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-03 03:15:18.836144 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-03 03:15:18.836157 | orchestrator | 2026-02-03 03:15:18.836170 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-02-03 03:15:18.836183 | orchestrator | Tuesday 03 February 2026 
03:15:11 +0000 (0:00:02.075) 0:00:31.601 ****** 2026-02-03 03:15:18.836196 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-02-03 03:15:18.836207 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-02-03 03:15:18.836215 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-02-03 03:15:18.836222 | orchestrator | 2026-02-03 03:15:18.836238 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-02-03 03:15:18.836246 | orchestrator | Tuesday 03 February 2026 03:15:12 +0000 (0:00:01.534) 0:00:33.136 ****** 2026-02-03 03:15:18.836254 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-03 03:15:18.836261 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-02-03 03:15:18.836268 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-02-03 03:15:18.836276 | orchestrator | 2026-02-03 03:15:18.836299 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-03 03:15:18.836306 | orchestrator | Tuesday 03 February 2026 03:15:14 +0000 (0:00:01.416) 0:00:34.553 ****** 2026-02-03 03:15:18.836314 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:15:18.836321 | orchestrator | 2026-02-03 03:15:18.836328 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-02-03 03:15:18.836335 | orchestrator | Tuesday 03 February 2026 03:15:14 +0000 (0:00:00.548) 0:00:35.101 ****** 2026-02-03 03:15:18.836344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-03 03:15:18.836354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-03 03:15:18.836366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-03 03:15:18.836387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 03:15:18.836396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 03:15:18.836422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 03:15:18.836438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-03 03:15:18.836446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-03 03:15:18.836453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-03 03:15:18.836591 | orchestrator | 2026-02-03 03:15:18.836601 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-02-03 03:15:18.836610 | orchestrator | Tuesday 03 February 2026 03:15:18 +0000 (0:00:03.358) 0:00:38.459 ****** 2026-02-03 03:15:18.836631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-03 03:15:19.623798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 03:15:19.623912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 03:15:19.623964 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:15:19.623986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-03 03:15:19.624005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 03:15:19.624022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 03:15:19.624038 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:15:19.624056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-03 03:15:19.624113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 03:15:19.624134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 03:15:19.624162 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:15:19.624180 | orchestrator | 2026-02-03 03:15:19.624198 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-02-03 
03:15:19.624217 | orchestrator | Tuesday 03 February 2026 03:15:18 +0000 (0:00:00.614) 0:00:39.074 ****** 2026-02-03 03:15:19.624235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-03 03:15:19.624253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 03:15:19.624271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 03:15:19.624288 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:15:19.624305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-03 03:15:19.624340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 03:15:20.481283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 03:15:20.481349 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:15:20.481356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-03 03:15:20.481362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 03:15:20.481366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 03:15:20.481370 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:15:20.481374 | orchestrator | 2026-02-03 03:15:20.481379 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-03 03:15:20.481384 | orchestrator | Tuesday 03 February 2026 03:15:19 +0000 (0:00:00.786) 0:00:39.860 ****** 2026-02-03 03:15:20.481388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-03 03:15:20.481392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 03:15:20.481405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 03:15:20.481413 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:15:20.481417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-03 03:15:20.481421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 03:15:20.481425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 03:15:20.481429 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:15:20.481433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-03 03:15:20.481446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 03:15:20.481452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 03:15:20.481462 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:15:21.891120 | orchestrator | 2026-02-03 03:15:21.891231 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-03 03:15:21.891250 | orchestrator | Tuesday 03 February 2026 03:15:20 +0000 (0:00:00.853) 0:00:40.715 ****** 2026-02-03 03:15:21.891266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-03 03:15:21.891280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 03:15:21.891294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 03:15:21.891306 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:15:21.891318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-03 03:15:21.891329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 03:15:21.891385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 03:15:21.891420 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:15:21.891454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-03 03:15:21.891565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 03:15:21.891581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 03:15:21.891592 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:15:21.891604 | orchestrator | 2026-02-03 03:15:21.891615 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-03 03:15:21.891626 | orchestrator | Tuesday 03 February 2026 03:15:21 +0000 (0:00:00.580) 0:00:41.296 ****** 2026-02-03 03:15:21.891637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-03 03:15:21.891648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 03:15:21.891682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 03:15:21.891695 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:15:21.891716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-03 03:15:22.950904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 03:15:22.951004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 03:15:22.951017 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:15:22.951028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-03 03:15:22.951034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 03:15:22.951040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 03:15:22.951063 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:15:22.951068 | orchestrator | 2026-02-03 03:15:22.951073 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-02-03 03:15:22.951079 | orchestrator | Tuesday 03 February 2026 03:15:21 +0000 (0:00:00.831) 0:00:42.127 ****** 2026-02-03 03:15:22.951094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
 2026-02-03 03:15:22.951112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 03:15:22.951117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 03:15:22.951122 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:15:22.951127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-02-03 03:15:22.951131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 03:15:22.951140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 03:15:22.951144 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:15:22.951152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-02-03 03:15:22.951159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 03:15:24.414766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 03:15:24.414843 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:15:24.414853 | orchestrator | 2026-02-03 03:15:24.414860 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-02-03 03:15:24.414866 | orchestrator | Tuesday 03 February 2026 03:15:22 +0000 (0:00:01.056) 0:00:43.184 ****** 2026-02-03 03:15:24.414873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-03 03:15:24.414880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 03:15:24.414903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 03:15:24.414909 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:15:24.414914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-03 03:15:24.414929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 03:15:24.414947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 03:15:24.414952 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:15:24.414957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-03 03:15:24.414962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 03:15:24.414973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 03:15:24.414978 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:15:24.414982 | orchestrator | 2026-02-03 03:15:24.414987 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-02-03 03:15:24.414992 | orchestrator | Tuesday 03 February 2026 03:15:23 +0000 (0:00:00.589) 0:00:43.773 ****** 2026-02-03 03:15:24.414997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-03 03:15:24.415002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 03:15:24.415015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 03:15:31.226769 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:15:31.226869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-03 03:15:31.226883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 03:15:31.226916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 03:15:31.226927 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:15:31.226936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-03 03:15:31.226960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 03:15:31.226970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 03:15:31.226979 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:15:31.226988 | orchestrator | 2026-02-03 03:15:31.226999 | orchestrator | 
TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-03 03:15:31.227010 | orchestrator | Tuesday 03 February 2026 03:15:24 +0000 (0:00:00.878) 0:00:44.651 ****** 2026-02-03 03:15:31.227019 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-03 03:15:31.227044 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-03 03:15:31.227054 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-03 03:15:31.227063 | orchestrator | 2026-02-03 03:15:31.227071 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-03 03:15:31.227081 | orchestrator | Tuesday 03 February 2026 03:15:26 +0000 (0:00:01.735) 0:00:46.387 ****** 2026-02-03 03:15:31.227090 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-03 03:15:31.227099 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-03 03:15:31.227108 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-03 03:15:31.227117 | orchestrator | 2026-02-03 03:15:31.227134 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-03 03:15:31.227143 | orchestrator | Tuesday 03 February 2026 03:15:27 +0000 (0:00:01.710) 0:00:48.098 ****** 2026-02-03 03:15:31.227151 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-03 03:15:31.227160 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-03 03:15:31.227169 | orchestrator | skipping: [testbed-node-0] => 
(item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-03 03:15:31.227178 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:15:31.227186 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-03 03:15:31.227195 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-03 03:15:31.227204 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:15:31.227212 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-03 03:15:31.227221 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:15:31.227230 | orchestrator | 2026-02-03 03:15:31.227239 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-02-03 03:15:31.227247 | orchestrator | Tuesday 03 February 2026 03:15:28 +0000 (0:00:00.867) 0:00:48.965 ****** 2026-02-03 03:15:31.227257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-03 03:15:31.227267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-03 03:15:31.227280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-03 03:15:31.227296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 03:15:35.534738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 03:15:35.534833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 03:15:35.534843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-03 03:15:35.534849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-03 03:15:35.534854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-03 03:15:35.534858 | orchestrator | 2026-02-03 03:15:35.534880 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-02-03 03:15:35.534893 | orchestrator | Tuesday 03 February 2026 03:15:31 +0000 (0:00:02.502) 0:00:51.468 ****** 2026-02-03 03:15:35.534899 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:15:35.534904 | orchestrator | 2026-02-03 03:15:35.534909 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-02-03 03:15:35.534913 | orchestrator | Tuesday 03 February 2026 03:15:32 +0000 (0:00:00.816) 0:00:52.284 ****** 2026-02-03 03:15:35.534930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-03 03:15:35.534952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-03 03:15:35.534957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-03 03:15:35.534962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-03 03:15:35.534966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-03 03:15:35.534974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-03 03:15:35.534979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-03 03:15:35.534995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-03 03:15:36.166284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-03 03:15:36.166368 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-03 03:15:36.166390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-03 03:15:36.166418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-03 03:15:36.166426 | orchestrator | 2026-02-03 03:15:36.166434 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 
2026-02-03 03:15:36.166441 | orchestrator | Tuesday 03 February 2026 03:15:35 +0000 (0:00:03.481) 0:00:55.765 ****** 2026-02-03 03:15:36.166449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-03 03:15:36.166517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-03 03:15:36.166525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-03 03:15:36.166531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-03 03:15:36.166537 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:15:36.166544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-03 03:15:36.166554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-03 03:15:36.166565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-03 03:15:36.166571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-03 03:15:36.166576 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:15:36.166588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-03 03:15:44.715775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-03 03:15:44.715882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  
2026-02-03 03:15:44.715899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-03 03:15:44.715932 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:15:44.715946 | orchestrator | 2026-02-03 03:15:44.715957 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-02-03 03:15:44.715969 | orchestrator | Tuesday 03 February 2026 03:15:36 +0000 (0:00:00.638) 0:00:56.404 ****** 2026-02-03 03:15:44.715979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-03 03:15:44.715992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-03 03:15:44.716004 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:15:44.716028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-03 03:15:44.716039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-03 03:15:44.716049 | 
orchestrator | skipping: [testbed-node-1] 2026-02-03 03:15:44.716058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-03 03:15:44.716068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-03 03:15:44.716078 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:15:44.716088 | orchestrator | 2026-02-03 03:15:44.716098 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-02-03 03:15:44.716108 | orchestrator | Tuesday 03 February 2026 03:15:37 +0000 (0:00:01.118) 0:00:57.523 ****** 2026-02-03 03:15:44.716118 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:15:44.716128 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:15:44.716137 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:15:44.716147 | orchestrator | 2026-02-03 03:15:44.716158 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-03 03:15:44.716168 | orchestrator | Tuesday 03 February 2026 03:15:38 +0000 (0:00:01.337) 0:00:58.860 ****** 2026-02-03 03:15:44.716178 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:15:44.716188 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:15:44.716197 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:15:44.716207 | orchestrator | 2026-02-03 03:15:44.716217 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-03 03:15:44.716227 | orchestrator | Tuesday 03 February 2026 03:15:40 +0000 (0:00:02.049) 0:01:00.910 ****** 2026-02-03 03:15:44.716237 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:15:44.716247 | 
orchestrator | 2026-02-03 03:15:44.716274 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-03 03:15:44.716285 | orchestrator | Tuesday 03 February 2026 03:15:41 +0000 (0:00:00.618) 0:01:01.528 ****** 2026-02-03 03:15:44.716298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-03 03:15:44.716327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-03 03:15:44.716342 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-03 03:15:44.716355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-03 03:15:44.716367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-03 03:15:44.716386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-03 03:15:45.339779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-03 03:15:45.339873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-03 03:15:45.339885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-03 03:15:45.339894 | orchestrator | 2026-02-03 03:15:45.339903 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-03 03:15:45.339911 | orchestrator | Tuesday 03 February 2026 03:15:44 +0000 (0:00:03.422) 0:01:04.951 ****** 2026-02-03 03:15:45.339920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-03 03:15:45.339928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-03 03:15:45.339965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-03 03:15:45.339974 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:15:45.339986 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-03 03:15:45.339994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-03 03:15:45.340001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-03 03:15:45.340008 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:15:45.340015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-03 03:15:45.340032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-03 03:15:55.158282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-03 03:15:55.159221 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:15:55.159253 | orchestrator | 2026-02-03 03:15:55.159262 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-03 03:15:55.159271 | orchestrator | Tuesday 03 February 2026 03:15:45 +0000 (0:00:00.621) 0:01:05.573 ****** 2026-02-03 03:15:55.159293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-03 03:15:55.159303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-03 03:15:55.159312 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:15:55.159319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-03 03:15:55.159325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-03 03:15:55.159332 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:15:55.159339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-03 03:15:55.159346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-03 03:15:55.159352 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:15:55.159358 | orchestrator | 2026-02-03 03:15:55.159365 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-03 03:15:55.159372 | orchestrator | Tuesday 03 February 2026 03:15:46 +0000 (0:00:00.853) 0:01:06.426 ****** 2026-02-03 03:15:55.159379 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:15:55.159386 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:15:55.159392 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:15:55.159399 | orchestrator | 2026-02-03 03:15:55.159405 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-03 03:15:55.159411 | orchestrator | Tuesday 03 February 2026 03:15:47 +0000 (0:00:01.632) 0:01:08.059 ****** 2026-02-03 03:15:55.159437 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:15:55.159444 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:15:55.159450 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:15:55.159457 | orchestrator | 2026-02-03 03:15:55.159463 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-03 03:15:55.159469 | orchestrator | 
Tuesday 03 February 2026 03:15:49 +0000 (0:00:02.016) 0:01:10.075 ****** 2026-02-03 03:15:55.159475 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:15:55.159481 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:15:55.159488 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:15:55.159508 | orchestrator | 2026-02-03 03:15:55.159515 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-03 03:15:55.159521 | orchestrator | Tuesday 03 February 2026 03:15:50 +0000 (0:00:00.307) 0:01:10.382 ****** 2026-02-03 03:15:55.159527 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:15:55.159534 | orchestrator | 2026-02-03 03:15:55.159540 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-03 03:15:55.159546 | orchestrator | Tuesday 03 February 2026 03:15:50 +0000 (0:00:00.638) 0:01:11.021 ****** 2026-02-03 03:15:55.159574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-03 03:15:55.159588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-03 03:15:55.159595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-03 03:15:55.159601 | orchestrator | 2026-02-03 03:15:55.159608 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-03 03:15:55.159616 | orchestrator | Tuesday 03 February 2026 03:15:53 +0000 (0:00:03.015) 0:01:14.036 ****** 2026-02-03 03:15:55.159628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 
'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-03 03:15:55.159635 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:15:55.159642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-03 03:15:55.159648 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:15:55.159659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-03 03:16:03.082089 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:16:03.082191 | orchestrator | 2026-02-03 03:16:03.082207 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-03 03:16:03.082218 | orchestrator | Tuesday 03 February 2026 03:15:55 +0000 (0:00:01.357) 0:01:15.393 ****** 2026-02-03 03:16:03.082246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-03 03:16:03.082258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-03 03:16:03.082266 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:16:03.082271 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-03 03:16:03.082293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-03 03:16:03.082299 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:16:03.082304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-03 03:16:03.082309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-03 03:16:03.082315 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:16:03.082320 | orchestrator | 2026-02-03 03:16:03.082325 
| orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-03 03:16:03.082331 | orchestrator | Tuesday 03 February 2026 03:15:56 +0000 (0:00:01.814) 0:01:17.208 ****** 2026-02-03 03:16:03.082336 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:16:03.082341 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:16:03.082346 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:16:03.082351 | orchestrator | 2026-02-03 03:16:03.082360 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-03 03:16:03.082365 | orchestrator | Tuesday 03 February 2026 03:15:57 +0000 (0:00:00.436) 0:01:17.644 ****** 2026-02-03 03:16:03.082370 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:16:03.082375 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:16:03.082381 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:16:03.082386 | orchestrator | 2026-02-03 03:16:03.082391 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-03 03:16:03.082397 | orchestrator | Tuesday 03 February 2026 03:15:58 +0000 (0:00:01.322) 0:01:18.967 ****** 2026-02-03 03:16:03.082406 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:16:03.082414 | orchestrator | 2026-02-03 03:16:03.082423 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-03 03:16:03.082431 | orchestrator | Tuesday 03 February 2026 03:15:59 +0000 (0:00:00.961) 0:01:19.929 ****** 2026-02-03 03:16:03.082465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-03 03:16:03.082480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 03:16:03.082487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-03 
03:16:03.082494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-03 03:16:03.082499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-03 03:16:03.082560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 03:16:03.818251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-03 03:16:03.818350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-03 03:16:03.818362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-03 03:16:03.818370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 03:16:03.818378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-03 03:16:03.818403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-03 03:16:03.818417 | orchestrator | 2026-02-03 03:16:03.818426 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-02-03 03:16:03.818434 | orchestrator | Tuesday 03 February 2026 03:16:03 +0000 (0:00:03.475) 0:01:23.404 ****** 2026-02-03 03:16:03.818442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-03 03:16:03.818449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 03:16:03.818457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-03 03:16:03.818464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-03 03:16:03.818471 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:16:03.818489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-03 03:16:10.326354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}})  2026-02-03 03:16:10.326481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-03 03:16:10.326497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-03 03:16:10.326507 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:16:10.326824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-03 03:16:10.326846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 03:16:10.326937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-03 
03:16:10.326948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-03 03:16:10.326955 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:16:10.326962 | orchestrator | 2026-02-03 03:16:10.326970 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-02-03 03:16:10.326978 | orchestrator | Tuesday 03 February 2026 03:16:03 +0000 (0:00:00.752) 0:01:24.156 ****** 2026-02-03 03:16:10.326985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-03 03:16:10.326992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-03 03:16:10.326999 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:16:10.327004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-03 03:16:10.327010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-03 03:16:10.327016 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:16:10.327023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-03 03:16:10.327029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-03 03:16:10.327035 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:16:10.327041 | orchestrator | 2026-02-03 03:16:10.327047 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-02-03 03:16:10.327052 | orchestrator | Tuesday 03 February 2026 03:16:05 +0000 (0:00:01.327) 0:01:25.483 ****** 2026-02-03 03:16:10.327058 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:16:10.327072 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:16:10.327078 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:16:10.327084 | orchestrator | 2026-02-03 03:16:10.327089 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-02-03 03:16:10.327095 | orchestrator | Tuesday 03 February 2026 03:16:06 +0000 (0:00:01.341) 0:01:26.824 ****** 2026-02-03 03:16:10.327101 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:16:10.327107 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:16:10.327114 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:16:10.327120 | orchestrator | 2026-02-03 03:16:10.327126 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-02-03 
03:16:10.327131 | orchestrator | Tuesday 03 February 2026 03:16:08 +0000 (0:00:02.067) 0:01:28.892 ****** 2026-02-03 03:16:10.327137 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:16:10.327143 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:16:10.327149 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:16:10.327155 | orchestrator | 2026-02-03 03:16:10.327161 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-02-03 03:16:10.327167 | orchestrator | Tuesday 03 February 2026 03:16:08 +0000 (0:00:00.322) 0:01:29.215 ****** 2026-02-03 03:16:10.327172 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:16:10.327178 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:16:10.327184 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:16:10.327191 | orchestrator | 2026-02-03 03:16:10.327197 | orchestrator | TASK [include_role : designate] ************************************************ 2026-02-03 03:16:10.327203 | orchestrator | Tuesday 03 February 2026 03:16:09 +0000 (0:00:00.328) 0:01:29.544 ****** 2026-02-03 03:16:10.327209 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:16:10.327215 | orchestrator | 2026-02-03 03:16:10.327222 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-02-03 03:16:10.327234 | orchestrator | Tuesday 03 February 2026 03:16:10 +0000 (0:00:01.019) 0:01:30.564 ****** 2026-02-03 03:16:13.582274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-03 03:16:13.583149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-03 03:16:13.583188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-03 03:16:13.583225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-03 03:16:13.583238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-03 03:16:13.583287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 
2026-02-03 03:16:13.583301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-03 03:16:13.583313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-03 03:16:13.583325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-03 
03:16:13.583345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-03 03:16:13.583357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-03 03:16:13.583380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-03 03:16:14.447181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-03 03:16:14.447270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-03 03:16:14.447280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-03 03:16:14.447308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-03 03:16:14.447316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-03 03:16:14.447323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-03 03:16:14.447357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-03 03:16:14.447365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-03 03:16:14.447371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 
'timeout': '30'}}})  2026-02-03 03:16:14.447385 | orchestrator | 2026-02-03 03:16:14.447393 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-02-03 03:16:14.447401 | orchestrator | Tuesday 03 February 2026 03:16:13 +0000 (0:00:03.468) 0:01:34.032 ****** 2026-02-03 03:16:14.447408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-03 03:16:14.447415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-03 03:16:14.447422 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-03 03:16:14.447435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-03 03:16:14.892248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-03 03:16:14.892343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-03 03:16:14.892374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-03 03:16:14.892381 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:16:14.892391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-03 03:16:14.892399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-03 03:16:14.892785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-03 03:16:14.892827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-03 03:16:14.892834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-03 03:16:14.892848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-03 03:16:14.892856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-03 03:16:14.892861 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:16:14.892866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-03 03:16:14.892872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-03 03:16:14.892884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-03 03:16:24.859667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-03 03:16:24.860731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-03 03:16:24.860806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-03 03:16:24.860828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-03 03:16:24.860846 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:16:24.860867 | orchestrator | 2026-02-03 03:16:24.860885 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-03 03:16:24.860902 | orchestrator | Tuesday 03 February 2026 03:16:14 +0000 (0:00:01.094) 0:01:35.126 ****** 2026-02-03 03:16:24.860919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-03 03:16:24.860938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-03 03:16:24.860957 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:16:24.860974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-03 03:16:24.860990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-03 03:16:24.861007 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:16:24.861022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-03 03:16:24.861069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-03 03:16:24.861088 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:16:24.861104 | orchestrator | 2026-02-03 03:16:24.861122 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-02-03 03:16:24.861165 | orchestrator | Tuesday 03 February 2026 03:16:16 +0000 (0:00:01.244) 0:01:36.371 ****** 2026-02-03 03:16:24.861183 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:16:24.861199 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:16:24.861215 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:16:24.861231 | orchestrator | 2026-02-03 03:16:24.861248 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-02-03 03:16:24.861265 | orchestrator | Tuesday 03 February 2026 03:16:17 +0000 (0:00:01.332) 0:01:37.703 ****** 2026-02-03 03:16:24.861281 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:16:24.861297 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:16:24.861315 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:16:24.861331 | 
orchestrator | 2026-02-03 03:16:24.861347 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-02-03 03:16:24.861363 | orchestrator | Tuesday 03 February 2026 03:16:19 +0000 (0:00:02.052) 0:01:39.756 ****** 2026-02-03 03:16:24.861381 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:16:24.861398 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:16:24.861415 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:16:24.861430 | orchestrator | 2026-02-03 03:16:24.861440 | orchestrator | TASK [include_role : glance] *************************************************** 2026-02-03 03:16:24.861450 | orchestrator | Tuesday 03 February 2026 03:16:19 +0000 (0:00:00.313) 0:01:40.069 ****** 2026-02-03 03:16:24.861460 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:16:24.861469 | orchestrator | 2026-02-03 03:16:24.861479 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-02-03 03:16:24.861488 | orchestrator | Tuesday 03 February 2026 03:16:20 +0000 (0:00:01.030) 0:01:41.100 ****** 2026-02-03 03:16:24.861510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-03 03:16:24.861565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-03 03:16:27.823149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-03 03:16:27.823232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-03 03:16:27.823274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-03 03:16:27.823283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-03 03:16:27.823294 | orchestrator | 2026-02-03 03:16:27.823302 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-03 03:16:27.823309 | orchestrator | Tuesday 03 February 2026 03:16:24 +0000 (0:00:04.123) 0:01:45.223 ****** 2026-02-03 03:16:27.823323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-03 03:16:27.924957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-03 03:16:27.925079 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:16:27.925102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-03 
03:16:27.925155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-03 03:16:27.925181 | orchestrator | skipping: [testbed-node-1] 
2026-02-03 03:16:27.925194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-03 03:16:27.925221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-03 03:16:39.786265 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:16:39.786434 | orchestrator | 2026-02-03 03:16:39.786455 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-02-03 03:16:39.786468 | orchestrator | 
Tuesday 03 February 2026 03:16:27 +0000 (0:00:02.941) 0:01:48.165 ****** 2026-02-03 03:16:39.786482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-03 03:16:39.786499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-03 03:16:39.786512 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:16:39.786524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-03 03:16:39.786598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-03 03:16:39.786613 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:16:39.786625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-03 03:16:39.786655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-03 03:16:39.786667 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:16:39.786679 | orchestrator | 2026-02-03 03:16:39.786691 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-03 03:16:39.786703 | orchestrator | Tuesday 03 February 2026 03:16:31 +0000 (0:00:03.588) 0:01:51.753 ****** 2026-02-03 03:16:39.786738 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:16:39.786750 | orchestrator 
| changed: [testbed-node-1] 2026-02-03 03:16:39.786762 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:16:39.786775 | orchestrator | 2026-02-03 03:16:39.786788 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-03 03:16:39.786802 | orchestrator | Tuesday 03 February 2026 03:16:32 +0000 (0:00:01.483) 0:01:53.237 ****** 2026-02-03 03:16:39.786815 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:16:39.786828 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:16:39.786842 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:16:39.786855 | orchestrator | 2026-02-03 03:16:39.786869 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-03 03:16:39.786900 | orchestrator | Tuesday 03 February 2026 03:16:35 +0000 (0:00:02.032) 0:01:55.270 ****** 2026-02-03 03:16:39.786913 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:16:39.786926 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:16:39.786939 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:16:39.786952 | orchestrator | 2026-02-03 03:16:39.786965 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-03 03:16:39.786978 | orchestrator | Tuesday 03 February 2026 03:16:35 +0000 (0:00:00.339) 0:01:55.610 ****** 2026-02-03 03:16:39.786992 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:16:39.787005 | orchestrator | 2026-02-03 03:16:39.787018 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-03 03:16:39.787031 | orchestrator | Tuesday 03 February 2026 03:16:36 +0000 (0:00:01.059) 0:01:56.669 ****** 2026-02-03 03:16:39.787046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-03 03:16:39.787063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-03 03:16:39.787077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-03 03:16:39.787090 | 
orchestrator | 2026-02-03 03:16:39.787104 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-03 03:16:39.787126 | orchestrator | Tuesday 03 February 2026 03:16:39 +0000 (0:00:02.956) 0:01:59.626 ****** 2026-02-03 03:16:39.787139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-03 03:16:39.787151 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:16:39.787170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-03 03:16:48.874448 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:16:48.874530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-03 03:16:48.874643 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:16:48.874655 | orchestrator | 2026-02-03 03:16:48.874663 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-03 03:16:48.874669 | orchestrator | Tuesday 03 February 2026 03:16:39 +0000 (0:00:00.395) 0:02:00.021 ****** 2026-02-03 03:16:48.874676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-03 03:16:48.874682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-03 03:16:48.874689 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:16:48.874694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-03 03:16:48.874699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-03 03:16:48.874703 | orchestrator | skipping: 
[testbed-node-1] 2026-02-03 03:16:48.874708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-03 03:16:48.874713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-03 03:16:48.874732 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:16:48.874737 | orchestrator | 2026-02-03 03:16:48.874742 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-03 03:16:48.874747 | orchestrator | Tuesday 03 February 2026 03:16:40 +0000 (0:00:00.878) 0:02:00.900 ****** 2026-02-03 03:16:48.874752 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:16:48.874756 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:16:48.874761 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:16:48.874765 | orchestrator | 2026-02-03 03:16:48.874770 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-03 03:16:48.874775 | orchestrator | Tuesday 03 February 2026 03:16:42 +0000 (0:00:01.397) 0:02:02.298 ****** 2026-02-03 03:16:48.874779 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:16:48.874784 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:16:48.874789 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:16:48.874793 | orchestrator | 2026-02-03 03:16:48.874798 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-03 03:16:48.874805 | orchestrator | Tuesday 03 February 2026 03:16:44 +0000 (0:00:02.066) 0:02:04.365 ****** 2026-02-03 03:16:48.874810 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:16:48.874815 | orchestrator | skipping: [testbed-node-1] 2026-02-03 
03:16:48.874819 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:16:48.874824 | orchestrator | 2026-02-03 03:16:48.874829 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-03 03:16:48.874833 | orchestrator | Tuesday 03 February 2026 03:16:44 +0000 (0:00:00.343) 0:02:04.708 ****** 2026-02-03 03:16:48.874838 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:16:48.874843 | orchestrator | 2026-02-03 03:16:48.874847 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-03 03:16:48.874852 | orchestrator | Tuesday 03 February 2026 03:16:45 +0000 (0:00:01.131) 0:02:05.840 ****** 2026-02-03 03:16:48.874872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-03 03:16:48.874886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-03 03:16:48.874898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-03 03:16:50.462793 | orchestrator | 2026-02-03 03:16:50.463697 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-03 03:16:50.463738 | orchestrator | Tuesday 03 February 2026 03:16:48 +0000 (0:00:03.270) 0:02:09.110 ****** 2026-02-03 03:16:50.463779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 
'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-03 03:16:50.463799 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:16:50.463838 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-03 03:16:50.463877 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:16:50.463898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-03 03:16:50.463909 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:16:50.463917 | orchestrator | 2026-02-03 03:16:50.463925 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-03 03:16:50.463933 | orchestrator | Tuesday 03 February 2026 03:16:49 +0000 (0:00:00.640) 0:02:09.750 ****** 2026-02-03 03:16:50.463941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-03 03:16:50.463957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-03 03:16:50.463967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-03 03:16:50.463983 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-03 03:16:59.422457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-03 03:16:59.422640 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:16:59.422674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-03 03:16:59.422698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-03 03:16:59.422730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-03 03:16:59.422744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-03 03:16:59.422757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-03 03:16:59.422769 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:16:59.422780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-03 03:16:59.422792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-03 03:16:59.422803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-03 03:16:59.422839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-03 03:16:59.422851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  
2026-02-03 03:16:59.422862 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:16:59.422874 | orchestrator |
2026-02-03 03:16:59.422887 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-02-03 03:16:59.422900 | orchestrator | Tuesday 03 February 2026 03:16:50 +0000 (0:00:00.949) 0:02:10.700 ******
2026-02-03 03:16:59.422911 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:16:59.422922 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:16:59.422935 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:16:59.422954 | orchestrator |
2026-02-03 03:16:59.422972 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-02-03 03:16:59.422991 | orchestrator | Tuesday 03 February 2026 03:16:52 +0000 (0:00:01.650) 0:02:12.350 ******
2026-02-03 03:16:59.423011 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:16:59.423030 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:16:59.423050 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:16:59.423068 | orchestrator |
2026-02-03 03:16:59.423082 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-02-03 03:16:59.423095 | orchestrator | Tuesday 03 February 2026 03:16:54 +0000 (0:00:02.277) 0:02:14.628 ******
2026-02-03 03:16:59.423107 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:16:59.423120 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:16:59.423153 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:16:59.423166 | orchestrator |
2026-02-03 03:16:59.423179 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-02-03 03:16:59.423192 | orchestrator | Tuesday 03 February 2026 03:16:54 +0000 (0:00:00.307) 0:02:14.936 ******
2026-02-03 03:16:59.423205 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:16:59.423218 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:16:59.423230 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:16:59.423243 | orchestrator |
2026-02-03 03:16:59.423256 | orchestrator | TASK [include_role : keystone] *************************************************
2026-02-03 03:16:59.423269 | orchestrator | Tuesday 03 February 2026 03:16:55 +0000 (0:00:00.315) 0:02:15.252 ******
2026-02-03 03:16:59.423281 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 03:16:59.423294 | orchestrator |
2026-02-03 03:16:59.423307 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2026-02-03 03:16:59.423325 | orchestrator | Tuesday 03 February 2026 03:16:56 +0000 (0:00:01.171) 0:02:16.423 ******
2026-02-03 03:16:59.423358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-03 03:16:59.423397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-03 03:16:59.423421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-03 03:16:59.423444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-03 03:16:59.423467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-03 03:17:00.026750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-03 03:17:00.026850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-03 03:17:00.026884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-03 03:17:00.026894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-03 03:17:00.026904 | orchestrator |
2026-02-03 03:17:00.026916 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2026-02-03 03:17:00.026926 | orchestrator | Tuesday 03 February 2026 03:16:59 +0000 (0:00:03.236) 0:02:19.659 ******
2026-02-03 03:17:00.026954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-03 03:17:00.026970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
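The keystone loop above makes the selection rule of the haproxy-config role visible: the `keystone` item (which defines an `haproxy` mapping) is `changed`, while `keystone-ssh` and `keystone-fernet` (no `haproxy` key) are reported as `skipping`. A minimal sketch of that filter, with the dicts abbreviated from the log entries; the helper name `haproxy_services` is illustrative, not the role's actual variable name:

```python
# Sketch of the per-item selection implied by the log: an item only produces
# haproxy config when it is enabled AND carries a non-empty 'haproxy' mapping.
services = {
    "keystone": {
        "enabled": True,
        "haproxy": {
            "keystone_internal": {"enabled": True, "port": "5000"},
            "keystone_external": {"enabled": True, "port": "5000"},
        },
    },
    "keystone-ssh": {"enabled": True},     # no 'haproxy' key -> "skipping"
    "keystone-fernet": {"enabled": True},  # no 'haproxy' key -> "skipping"
}

def haproxy_services(services: dict) -> list:
    """Return the names of services that would yield haproxy config."""
    return [
        name
        for name, svc in services.items()
        if svc.get("enabled") and svc.get("haproxy")
    ]

print(haproxy_services(services))  # ['keystone']
```

This also explains the "Add configuration for keystone when using single external frontend" task skipping everywhere: that variant only applies when a single shared external frontend is configured, which this testbed does not use.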
2026-02-03 03:17:00.026981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-03 03:17:00.026996 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:17:00.027008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-03 03:17:00.027018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-03 03:17:00.027027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-03 03:17:00.027037 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:17:00.027058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-03 03:17:09.349314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-03 03:17:09.425981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-03 03:17:09.426118 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:17:09.426130 | orchestrator |
2026-02-03 03:17:09.426138 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2026-02-03 03:17:09.426146 | orchestrator | Tuesday 03 February 2026 03:17:00 +0000 (0:00:00.598) 0:02:20.258 ******
2026-02-03 03:17:09.426155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-03 03:17:09.426165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-03 03:17:09.426173 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:17:09.426179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-03 03:17:09.426186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-03 03:17:09.426192 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:17:09.426198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-03 03:17:09.426205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-03 03:17:09.426211 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:17:09.426217 | orchestrator |
2026-02-03 03:17:09.426223 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2026-02-03 03:17:09.426229 | orchestrator | Tuesday 03 February 2026 03:17:01 +0000 (0:00:01.066) 0:02:21.325 ******
2026-02-03 03:17:09.426235 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:17:09.426241 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:17:09.426269 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:17:09.426275 | orchestrator |
2026-02-03 03:17:09.426281 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2026-02-03 03:17:09.426287 | orchestrator | Tuesday 03 February 2026 03:17:02 +0000 (0:00:01.333) 0:02:22.658 ******
2026-02-03 03:17:09.426293 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:17:09.426299 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:17:09.426305 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:17:09.426310 | orchestrator |
2026-02-03 03:17:09.426316 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2026-02-03 03:17:09.426322 | orchestrator | Tuesday 03 February 2026 03:17:04 +0000 (0:00:02.125) 0:02:24.784 ******
2026-02-03 03:17:09.426328 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:17:09.426344 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:17:09.426351 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:17:09.426356 | orchestrator |
2026-02-03 03:17:09.426363 | orchestrator | TASK [include_role : magnum] ***************************************************
2026-02-03 03:17:09.426390 | orchestrator | Tuesday 03 February 2026 03:17:04 +0000 (0:00:00.329) 0:02:25.113 ******
2026-02-03 03:17:09.426397 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 03:17:09.426403 | orchestrator |
2026-02-03 03:17:09.426409 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2026-02-03 03:17:09.426415 | orchestrator | Tuesday 03 February 2026 03:17:06 +0000 (0:00:01.279) 0:02:26.393 ******
2026-02-03 03:17:09.426423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-03 03:17:09.426434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-03 03:17:09.426441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-03 03:17:09.426453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-03 03:17:09.426467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-03 03:17:14.651613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-03 03:17:14.651722 | orchestrator |
2026-02-03 03:17:14.651736 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2026-02-03 03:17:14.651746 | orchestrator | Tuesday 03 February 2026 03:17:09 +0000 (0:00:03.187) 0:02:29.580 ******
2026-02-03 03:17:14.651758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-03 03:17:14.651813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-03 03:17:14.651840 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:17:14.651853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-03 03:17:14.651874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-03 03:17:14.651881 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:17:14.651887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-03 03:17:14.651894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-03 03:17:14.651905 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:17:14.651911 | orchestrator |
2026-02-03 03:17:14.651918 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2026-02-03 03:17:14.651924 | orchestrator | Tuesday 03 February 2026 03:17:09 +0000 (0:00:00.646) 0:02:30.227 ******
2026-02-03 03:17:14.651931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-02-03 03:17:14.651939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-02-03 03:17:14.651947 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:17:14.651953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-02-03 03:17:14.651959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-02-03 03:17:14.651965 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:17:14.651971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-02-03 03:17:14.651977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-02-03 03:17:14.651983 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:17:14.651989 | orchestrator |
2026-02-03 03:17:14.651998 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2026-02-03 03:17:14.652004 | orchestrator | Tuesday 03 February 2026 03:17:10 +0000 (0:00:00.892) 0:02:31.120 ******
2026-02-03 03:17:14.652010 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:17:14.652016 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:17:14.652021 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:17:14.652027 | orchestrator |
2026-02-03 03:17:14.652033 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2026-02-03 03:17:14.652039 | orchestrator | Tuesday 03 February 2026 03:17:12 +0000 (0:00:01.654) 0:02:32.774 ******
2026-02-03 03:17:14.652045 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:17:14.652051 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:17:14.652057 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:17:14.652063 | orchestrator |
2026-02-03 03:17:14.652069 | orchestrator | TASK [include_role : manila] ***************************************************
2026-02-03 03:17:14.652078 | orchestrator | Tuesday 03 February 2026 03:17:14 +0000 (0:00:02.109) 0:02:34.884 ******
2026-02-03 03:17:19.109829 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 03:17:19.109904 | orchestrator |
2026-02-03 03:17:19.109912 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2026-02-03 03:17:19.109917 | orchestrator | Tuesday 03 February 2026 03:17:15 +0000 (0:00:01.084) 0:02:35.969 ******
2026-02-03 03:17:19.109925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-03 03:17:19.109951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes':
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 03:17:19.109958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-03 03:17:19.109964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-03 03:17:19.109980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-03 03:17:19.109999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 03:17:19.110004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-03 03:17:19.110049 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-03 03:17:19.110055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-03 03:17:19.110060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 03:17:19.110068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-03 03:17:19.110078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-03 03:17:20.070788 | orchestrator | 2026-02-03 03:17:20.070867 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-03 03:17:20.070876 | orchestrator | Tuesday 03 February 2026 03:17:19 +0000 (0:00:03.460) 0:02:39.430 ****** 2026-02-03 03:17:20.070900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-03 03:17:20.070908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 03:17:20.070914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-03 03:17:20.070919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-03 03:17:20.070923 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:17:20.070940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-03 03:17:20.070958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 03:17:20.070967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-03 03:17:20.070971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-03 03:17:20.070976 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:17:20.070980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-03 03:17:20.070985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 03:17:20.070992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-03 03:17:20.071001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-03 03:17:31.457833 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:17:31.457946 | orchestrator | 2026-02-03 03:17:31.457961 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-02-03 03:17:31.457973 | orchestrator | Tuesday 03 February 2026 03:17:20 +0000 (0:00:00.960) 0:02:40.391 ****** 2026-02-03 03:17:31.457985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-03 03:17:31.457997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-03 03:17:31.458009 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:17:31.458127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-03 03:17:31.458147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-03 03:17:31.458165 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:17:31.458182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-03 03:17:31.458199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-03 03:17:31.458217 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:17:31.458233 | orchestrator | 2026-02-03 03:17:31.458250 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-02-03 03:17:31.458266 | orchestrator | Tuesday 03 February 2026 03:17:21 +0000 (0:00:00.909) 0:02:41.301 ****** 2026-02-03 03:17:31.458279 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:17:31.458289 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:17:31.458299 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:17:31.458316 | orchestrator | 2026-02-03 03:17:31.458333 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-02-03 03:17:31.458349 | orchestrator | Tuesday 03 February 2026 03:17:22 +0000 (0:00:01.345) 0:02:42.646 ****** 2026-02-03 03:17:31.458365 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:17:31.458382 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:17:31.458397 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:17:31.458415 | orchestrator | 2026-02-03 03:17:31.458431 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-02-03 03:17:31.458449 | orchestrator | Tuesday 03 February 2026 03:17:24 +0000 (0:00:02.189) 0:02:44.835 ****** 2026-02-03 03:17:31.458464 | 
orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:17:31.458479 | orchestrator | 2026-02-03 03:17:31.458497 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-02-03 03:17:31.458512 | orchestrator | Tuesday 03 February 2026 03:17:25 +0000 (0:00:01.388) 0:02:46.223 ****** 2026-02-03 03:17:31.458528 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-03 03:17:31.458543 | orchestrator | 2026-02-03 03:17:31.458559 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-02-03 03:17:31.458640 | orchestrator | Tuesday 03 February 2026 03:17:29 +0000 (0:00:03.105) 0:02:49.329 ****** 2026-02-03 03:17:31.458712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 03:17:31.458741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-03 03:17:31.458756 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:17:31.458774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 03:17:31.458798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-03 03:17:31.458810 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:17:31.458833 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 03:17:33.973715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 
'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-03 03:17:33.973789 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:17:33.973797 | orchestrator | 2026-02-03 03:17:33.973803 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-02-03 03:17:33.973809 | orchestrator | Tuesday 03 February 2026 03:17:31 +0000 (0:00:02.364) 0:02:51.693 ****** 2026-02-03 03:17:33.973846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 03:17:33.973853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-03 03:17:33.973857 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:17:33.973873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 03:17:33.973889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  
2026-02-03 03:17:33.973894 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:17:33.973898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 03:17:33.973906 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-03 03:17:43.804995 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:17:43.805096 | orchestrator | 2026-02-03 03:17:43.805107 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-03 03:17:43.805116 | orchestrator | Tuesday 03 February 2026 03:17:33 +0000 (0:00:02.518) 0:02:54.212 ****** 2026-02-03 03:17:43.805125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-03 03:17:43.805155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 
rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-03 03:17:43.805175 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:17:43.805181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-03 03:17:43.805187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-03 03:17:43.805193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-03 03:17:43.805200 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:17:43.805206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-03 03:17:43.805212 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:17:43.805217 | orchestrator | 2026-02-03 03:17:43.805224 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-03 03:17:43.805230 | orchestrator | Tuesday 03 February 2026 03:17:36 +0000 (0:00:02.850) 0:02:57.062 ****** 2026-02-03 03:17:43.805235 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:17:43.805261 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:17:43.805267 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:17:43.805273 | orchestrator | 2026-02-03 03:17:43.805279 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-02-03 03:17:43.805284 | orchestrator | Tuesday 03 February 2026 03:17:39 +0000 (0:00:02.208) 0:02:59.270 ****** 2026-02-03 03:17:43.805290 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:17:43.805296 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:17:43.805301 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:17:43.805307 | orchestrator | 2026-02-03 
03:17:43.805314 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-03 03:17:43.805320 | orchestrator | Tuesday 03 February 2026 03:17:40 +0000 (0:00:01.448) 0:03:00.719 ****** 2026-02-03 03:17:43.805325 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:17:43.805331 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:17:43.805336 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:17:43.805342 | orchestrator | 2026-02-03 03:17:43.805348 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-02-03 03:17:43.805354 | orchestrator | Tuesday 03 February 2026 03:17:40 +0000 (0:00:00.337) 0:03:01.057 ****** 2026-02-03 03:17:43.805360 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:17:43.805366 | orchestrator | 2026-02-03 03:17:43.805372 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-02-03 03:17:43.805378 | orchestrator | Tuesday 03 February 2026 03:17:42 +0000 (0:00:01.338) 0:03:02.395 ****** 2026-02-03 03:17:43.805388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-03 03:17:43.805397 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-03 03:17:43.805403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-03 03:17:43.805409 | orchestrator | 2026-02-03 03:17:43.805415 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-03 03:17:43.805427 | orchestrator | Tuesday 03 February 2026 03:17:43 +0000 (0:00:01.452) 0:03:03.848 ****** 2026-02-03 03:17:43.805438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-03 03:17:52.146742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-03 03:17:52.146818 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:17:52.146825 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:17:52.146830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-03 03:17:52.146835 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:17:52.146839 | orchestrator | 2026-02-03 03:17:52.146845 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-03 03:17:52.146851 | orchestrator | Tuesday 03 February 2026 03:17:43 +0000 (0:00:00.393) 0:03:04.241 ****** 2026-02-03 03:17:52.146856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-03 03:17:52.146862 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:17:52.146866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-03 03:17:52.146870 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:17:52.146874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-03 03:17:52.146892 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:17:52.146896 | orchestrator | 2026-02-03 03:17:52.146926 | 
orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-03 03:17:52.146931 | orchestrator | Tuesday 03 February 2026 03:17:44 +0000 (0:00:00.860) 0:03:05.102 ****** 2026-02-03 03:17:52.146934 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:17:52.146938 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:17:52.146942 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:17:52.146946 | orchestrator | 2026-02-03 03:17:52.146950 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-03 03:17:52.146954 | orchestrator | Tuesday 03 February 2026 03:17:45 +0000 (0:00:00.492) 0:03:05.595 ****** 2026-02-03 03:17:52.146957 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:17:52.146961 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:17:52.146965 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:17:52.146969 | orchestrator | 2026-02-03 03:17:52.146973 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-03 03:17:52.146976 | orchestrator | Tuesday 03 February 2026 03:17:46 +0000 (0:00:01.306) 0:03:06.901 ****** 2026-02-03 03:17:52.146980 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:17:52.146984 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:17:52.146988 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:17:52.146991 | orchestrator | 2026-02-03 03:17:52.146995 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-02-03 03:17:52.146999 | orchestrator | Tuesday 03 February 2026 03:17:46 +0000 (0:00:00.318) 0:03:07.220 ****** 2026-02-03 03:17:52.147003 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:17:52.147007 | orchestrator | 2026-02-03 03:17:52.147010 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 
2026-02-03 03:17:52.147014 | orchestrator | Tuesday 03 February 2026 03:17:48 +0000 (0:00:01.480) 0:03:08.701 ****** 2026-02-03 03:17:52.147030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-03 03:17:52.147040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:52.147045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:52.147054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:52.147059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-03 03:17:52.147068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:52.382426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-03 03:17:52.382504 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-03 03:17:52.382530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:52.382541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-03 03:17:52.382551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:52.382574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:52.382590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:52.382667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-03 03:17:52.382681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-03 03:17:52.382687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 
'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:52.382693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:52.382704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-03 03:17:52.492131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': 
False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-03 03:17:52.492259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:52.492300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 
'yes'}}}})  2026-02-03 03:17:52.492312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-03 03:17:52.492325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-03 03:17:52.492359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-03 03:17:52.492372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-03 03:17:52.492389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:52.492408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:52.492416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:52.492423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-03 03:17:52.492437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:52.755059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:52.755318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-03 03:17:52.755354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-03 03:17:52.755373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:52.755394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  
2026-02-03 03:17:52.755420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-03 03:17:52.755480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:52.755548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-03 03:17:52.755574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-03 03:17:52.755634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:52.755661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-03 03:17:52.755686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-03 03:17:52.755740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:53.819347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 
'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-03 03:17:53.819469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-03 03:17:53.819487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:53.819501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-03 03:17:53.819516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-03 03:17:53.819529 | orchestrator | 2026-02-03 03:17:53.819543 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-02-03 03:17:53.819583 | orchestrator | Tuesday 03 February 2026 03:17:52 +0000 (0:00:04.292) 0:03:12.993 ****** 2026-02-03 03:17:53.819691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-03 03:17:53.819710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:53.819723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:53.819735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:53.819747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-03 03:17:53.819784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:53.907882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-03 03:17:53.907998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-03 03:17:53.908024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:53.908043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-03 03:17:53.908060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-03 03:17:53.908146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:53.908293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:53.908311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:53.908327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-03 03:17:53.908340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:53.908349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-03 03:17:53.908379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-03 03:17:53.908402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:53.992137 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:53.993254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-03 03:17:53.993301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-03 03:17:53.993348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-03 03:17:53.993371 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:17:53.993417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-03 03:17:53.993469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-03 03:17:53.993491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:53.993512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:53.993531 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:53.993566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-03 03:17:53.993586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:53.993650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:54.218369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-03 03:17:54.218576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': 
{'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-03 03:17:54.218715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:54.218736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-03 03:17:54.218753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-03 03:17:54.218774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:54.218816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-03 03:17:54.218833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-03 03:17:54.218849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:54.218870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-03 03:17:54.218894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-03 03:17:54.218911 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:17:54.218929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-03 03:17:54.218953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-03 03:18:04.534955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-03 03:18:04.535050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-03 03:18:04.535083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': 
'30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-03 03:18:04.535106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-03 03:18:04.535114 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:18:04.535122 | orchestrator | 2026-02-03 03:18:04.535130 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-03 03:18:04.535139 | orchestrator | Tuesday 03 February 2026 03:17:54 +0000 (0:00:01.463) 0:03:14.457 ****** 2026-02-03 03:18:04.535146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-03 03:18:04.535155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-03 03:18:04.535163 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:18:04.535169 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-03 03:18:04.535176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-03 03:18:04.535183 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:18:04.535204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-03 03:18:04.535212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-03 03:18:04.535226 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:18:04.535233 | orchestrator | 2026-02-03 03:18:04.535241 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-03 03:18:04.535249 | orchestrator | Tuesday 03 February 2026 03:17:56 +0000 (0:00:02.031) 0:03:16.489 ****** 2026-02-03 03:18:04.535256 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:18:04.535264 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:18:04.535272 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:18:04.535279 | orchestrator | 2026-02-03 03:18:04.535287 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-03 03:18:04.535294 | orchestrator | Tuesday 03 February 2026 03:17:57 +0000 (0:00:01.348) 0:03:17.837 ****** 2026-02-03 03:18:04.535301 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:18:04.535309 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:18:04.535316 | orchestrator | changed: [testbed-node-2] 
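A note on the long runs of "skipping" above: each loop item is a kolla-ansible service definition, and the role only acts on entries whose `enabled` flag is truthy and whose host is in the service's group — which is why only `neutron-server` and `neutron-ovn-metadata-agent` (`enabled: True`) produce changes while the rest are skipped. The following is a minimal illustrative sketch of that gating, not kolla-ansible's actual role code; the helper names are hypothetical:

```python
# Sketch of the enabled/host_in_groups gating seen in the loop output above.
# This is an illustrative re-implementation, NOT kolla-ansible's real logic.

def truthy(value):
    """Coerce kolla-style enabled flags, which appear as both Python
    booleans (True/False) and YAML-ish strings ('yes'/'no') in the log."""
    if isinstance(value, bool):
        return value
    return str(value).lower() in ("yes", "true", "1")

def services_to_configure(services):
    """Return the names of services the role would not skip."""
    return [
        name
        for name, svc in services.items()
        if truthy(svc.get("enabled", False)) and svc.get("host_in_groups", False)
    ]

# Trimmed-down versions of three entries from the log output:
services = {
    "neutron-server": {"enabled": True, "host_in_groups": True},
    "neutron-tls-proxy": {"enabled": "no", "host_in_groups": True},
    "neutron-ovn-metadata-agent": {"enabled": True, "host_in_groups": False},
}

print(services_to_configure(services))  # ['neutron-server']
```

Note that `neutron-ovn-metadata-agent` is enabled but has `host_in_groups: False` on these nodes for this particular task, so it is skipped here too; it is handled by its own deployment tasks elsewhere in the run.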
2026-02-03 03:18:04.535323 | orchestrator | 2026-02-03 03:18:04.535331 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-03 03:18:04.535338 | orchestrator | Tuesday 03 February 2026 03:17:59 +0000 (0:00:02.116) 0:03:19.954 ****** 2026-02-03 03:18:04.535345 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:18:04.535353 | orchestrator | 2026-02-03 03:18:04.535360 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-03 03:18:04.535368 | orchestrator | Tuesday 03 February 2026 03:18:00 +0000 (0:00:01.250) 0:03:21.204 ****** 2026-02-03 03:18:04.535377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-03 03:18:04.535391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-03 03:18:04.535399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-03 03:18:04.535411 | orchestrator | 2026-02-03 03:18:04.535419 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-03 03:18:04.535433 | orchestrator | Tuesday 03 February 2026 03:18:04 +0000 (0:00:03.558) 0:03:24.763 ****** 2026-02-03 03:18:15.526565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-03 03:18:15.526735 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:18:15.526752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-03 03:18:15.526760 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:18:15.526781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-03 03:18:15.526789 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:18:15.526796 | orchestrator | 2026-02-03 03:18:15.526805 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-02-03 03:18:15.526813 | orchestrator | Tuesday 03 February 2026 03:18:05 +0000 (0:00:00.533) 0:03:25.296 ****** 2026-02-03 03:18:15.526820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-03 03:18:15.526849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-03 03:18:15.526858 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:18:15.526865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}})  2026-02-03 03:18:15.526872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-03 03:18:15.526878 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:18:15.526901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-03 03:18:15.526909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-03 03:18:15.526915 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:18:15.526921 | orchestrator | 2026-02-03 03:18:15.526929 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-03 03:18:15.526936 | orchestrator | Tuesday 03 February 2026 03:18:05 +0000 (0:00:00.772) 0:03:26.069 ****** 2026-02-03 03:18:15.526942 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:18:15.526948 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:18:15.526954 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:18:15.526960 | orchestrator | 2026-02-03 03:18:15.526967 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-03 03:18:15.526972 | orchestrator | Tuesday 03 February 2026 03:18:07 +0000 (0:00:01.930) 0:03:27.999 ****** 2026-02-03 03:18:15.526976 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:18:15.526980 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:18:15.526984 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:18:15.526988 | orchestrator | 
2026-02-03 03:18:15.526992 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-02-03 03:18:15.526996 | orchestrator | Tuesday 03 February 2026 03:18:09 +0000 (0:00:01.887) 0:03:29.887 ****** 2026-02-03 03:18:15.527000 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:18:15.527004 | orchestrator | 2026-02-03 03:18:15.527008 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-03 03:18:15.527012 | orchestrator | Tuesday 03 February 2026 03:18:11 +0000 (0:00:01.634) 0:03:31.522 ****** 2026-02-03 03:18:15.527019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-03 03:18:15.527035 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-03 03:18:15.527045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-03 03:18:16.514322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 03:18:16.514447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 03:18:16.514487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-03 03:18:16.514525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 03:18:16.514533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-03 03:18:16.514541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-03 03:18:16.514548 | orchestrator | 2026-02-03 03:18:16.514558 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-03 03:18:16.514567 | orchestrator | Tuesday 03 February 2026 03:18:15 +0000 (0:00:04.242) 0:03:35.765 ****** 2026-02-03 03:18:16.514601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2026-02-03 03:18:16.514651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 03:18:16.514667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-03 03:18:16.514675 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:18:16.514684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-03 03:18:16.514699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 03:18:27.780839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-03 03:18:27.780989 | 
orchestrator | skipping: [testbed-node-1] 2026-02-03 03:18:27.781033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-03 03:18:27.781075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 03:18:27.781088 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-03 03:18:27.781100 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:18:27.781112 | orchestrator | 2026-02-03 03:18:27.781130 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-02-03 03:18:27.781155 | orchestrator | Tuesday 03 February 2026 03:18:16 +0000 (0:00:00.985) 0:03:36.751 ****** 2026-02-03 03:18:27.781185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-03 03:18:27.781206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-03 03:18:27.781227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-03 03:18:27.781272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-03 
03:18:27.781293 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:18:27.781313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-03 03:18:27.781332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-03 03:18:27.781367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-03 03:18:27.781382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-03 03:18:27.781395 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:18:27.781409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-03 03:18:27.781422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-03 03:18:27.781443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-03 03:18:27.781457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': 
{'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-03 03:18:27.781470 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:18:27.781484 | orchestrator | 2026-02-03 03:18:27.781497 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-02-03 03:18:27.781510 | orchestrator | Tuesday 03 February 2026 03:18:17 +0000 (0:00:01.334) 0:03:38.085 ****** 2026-02-03 03:18:27.781523 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:18:27.781536 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:18:27.781548 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:18:27.781561 | orchestrator | 2026-02-03 03:18:27.781575 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-02-03 03:18:27.781587 | orchestrator | Tuesday 03 February 2026 03:18:19 +0000 (0:00:01.384) 0:03:39.470 ****** 2026-02-03 03:18:27.781598 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:18:27.781608 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:18:27.781619 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:18:27.781678 | orchestrator | 2026-02-03 03:18:27.781691 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-02-03 03:18:27.781702 | orchestrator | Tuesday 03 February 2026 03:18:21 +0000 (0:00:02.202) 0:03:41.673 ****** 2026-02-03 03:18:27.781712 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:18:27.781723 | orchestrator | 2026-02-03 03:18:27.781734 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-02-03 03:18:27.781745 | orchestrator | Tuesday 03 February 2026 03:18:23 +0000 (0:00:01.618) 0:03:43.291 ****** 2026-02-03 03:18:27.781756 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-02-03 03:18:27.781769 | orchestrator | 2026-02-03 03:18:27.781780 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-02-03 03:18:27.781791 | orchestrator | Tuesday 03 February 2026 03:18:23 +0000 (0:00:00.855) 0:03:44.147 ****** 2026-02-03 03:18:27.781804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-03 03:18:27.781835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-03 03:18:39.682905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-03 03:18:39.683009 | orchestrator | 2026-02-03 03:18:39.683022 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-02-03 03:18:39.683032 | orchestrator | Tuesday 03 February 2026 03:18:27 +0000 (0:00:03.865) 0:03:48.013 ****** 2026-02-03 03:18:39.683040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-03 03:18:39.683047 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:18:39.683071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-03 03:18:39.683078 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:18:39.683084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-03 03:18:39.683091 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:18:39.683098 | orchestrator |
2026-02-03 03:18:39.683105 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2026-02-03 03:18:39.683112 | orchestrator | Tuesday 03 February 2026 03:18:29 +0000 (0:00:01.439) 0:03:49.453 ******
2026-02-03 03:18:39.683120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-03 03:18:39.683130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-03 03:18:39.683158 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:18:39.683166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-03 03:18:39.683172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-03 03:18:39.683179 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:18:39.683185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-03 03:18:39.683192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-03 03:18:39.683230 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:18:39.683238 | orchestrator |
2026-02-03 03:18:39.683245 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-03 03:18:39.683251 | orchestrator | Tuesday 03 February 2026 03:18:30 +0000 (0:00:01.453) 0:03:50.906 ******
2026-02-03 03:18:39.683258 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:18:39.683265 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:18:39.683271 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:18:39.683278 | orchestrator |
2026-02-03 03:18:39.683284 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-03 03:18:39.683290 | orchestrator | Tuesday 03 February 2026 03:18:33 +0000 (0:00:02.436) 0:03:53.343 ******
2026-02-03 03:18:39.683296 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:18:39.683302 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:18:39.683308 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:18:39.683315 | orchestrator |
2026-02-03 03:18:39.683322 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-02-03 03:18:39.683327 | orchestrator | Tuesday 03 February 2026 03:18:36 +0000 (0:00:03.115) 0:03:56.459 ******
2026-02-03 03:18:39.683335 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-02-03 03:18:39.683342 | orchestrator |
2026-02-03 03:18:39.683348 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-02-03 03:18:39.683354 | orchestrator | Tuesday 03 February 2026 03:18:37 +0000 (0:00:01.173) 0:03:57.633 ******
2026-02-03 03:18:39.683368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-03 03:18:39.683375 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:18:39.683382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-03 03:18:39.683395 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:18:39.683402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-03 03:18:39.683409 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:18:39.683416 | orchestrator |
2026-02-03 03:18:39.683423 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-02-03 03:18:39.683430 | orchestrator | Tuesday 03 February 2026 03:18:38 +0000 (0:00:01.004) 0:03:58.638 ******
2026-02-03 03:18:39.683436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-03 03:18:39.683442 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:18:39.683447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-03 03:18:39.683460 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:19:03.278539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-03 03:19:03.278654 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:19:03.278705 | orchestrator |
2026-02-03 03:19:03.278717 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-02-03 03:19:03.278728 | orchestrator | Tuesday 03 February 2026 03:18:39 +0000 (0:00:01.278) 0:03:59.916 ******
2026-02-03 03:19:03.278738 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:19:03.278747 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:19:03.278755 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:19:03.278764 | orchestrator |
2026-02-03 03:19:03.278773 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-03 03:19:03.278782 | orchestrator | Tuesday 03 February 2026 03:18:41 +0000 (0:00:01.622) 0:04:01.539 ******
2026-02-03 03:19:03.278791 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:19:03.278800 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:19:03.278809 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:19:03.278817 | orchestrator |
2026-02-03 03:19:03.278826 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-03 03:19:03.278835 | orchestrator | Tuesday 03 February 2026 03:18:44 +0000 (0:00:02.810) 0:04:04.350 ******
2026-02-03 03:19:03.278867 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:19:03.278876 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:19:03.278884 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:19:03.278893 | orchestrator |
2026-02-03 03:19:03.278917 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2026-02-03 03:19:03.278926 | orchestrator | Tuesday 03 February 2026 03:18:46 +0000 (0:00:02.754) 0:04:07.105 ******
2026-02-03 03:19:03.278935 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2026-02-03 03:19:03.278945 | orchestrator |
2026-02-03 03:19:03.278954 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2026-02-03 03:19:03.278963 | orchestrator | Tuesday 03 February 2026 03:18:48 +0000 (0:00:01.167) 0:04:08.272 ******
2026-02-03 03:19:03.278972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-03 03:19:03.278981 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:19:03.278991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-03 03:19:03.279000 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:19:03.279009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-03 03:19:03.279018 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:19:03.279027 | orchestrator |
2026-02-03 03:19:03.279036 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2026-02-03 03:19:03.279046 | orchestrator | Tuesday 03 February 2026 03:18:49 +0000 (0:00:01.290) 0:04:09.563 ******
2026-02-03 03:19:03.279074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-03 03:19:03.279085 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:19:03.279096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-03 03:19:03.279112 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:19:03.279123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-03 03:19:03.279132 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:19:03.279142 | orchestrator |
2026-02-03 03:19:03.279157 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2026-02-03 03:19:03.279166 | orchestrator | Tuesday 03 February 2026 03:18:50 +0000 (0:00:01.356) 0:04:10.919 ******
2026-02-03 03:19:03.279175 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:19:03.279184 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:19:03.279192 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:19:03.279201 | orchestrator |
2026-02-03 03:19:03.279210 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-03 03:19:03.279219 | orchestrator | Tuesday 03 February 2026 03:18:52 +0000 (0:00:01.885) 0:04:12.805 ******
2026-02-03 03:19:03.279229 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:19:03.279237 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:19:03.279246 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:19:03.279255 | orchestrator |
2026-02-03 03:19:03.279264 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-03 03:19:03.279273 | orchestrator | Tuesday 03 February 2026 03:18:54 +0000 (0:00:02.355) 0:04:15.161 ******
2026-02-03 03:19:03.279282 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:19:03.279291 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:19:03.279300 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:19:03.279308 | orchestrator |
2026-02-03 03:19:03.279317 | orchestrator | TASK [include_role : octavia] **************************************************
2026-02-03 03:19:03.279326 | orchestrator | Tuesday 03 February 2026 03:18:58 +0000 (0:00:03.323) 0:04:18.484 ******
2026-02-03 03:19:03.279334 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 03:19:03.279343 | orchestrator |
2026-02-03 03:19:03.279352 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2026-02-03 03:19:03.279361 | orchestrator | Tuesday 03 February 2026 03:18:59 +0000 (0:00:01.352) 0:04:19.836 ******
2026-02-03 03:19:03.279372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-03 03:19:03.279383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-03 03:19:03.279408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-03 03:19:04.045300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-03 03:19:04.045402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-03 03:19:04.045416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-03 03:19:04.045425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-03 03:19:04.045434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-03 03:19:04.045459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-03 03:19:04.045480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-03 03:19:04.045488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-03 03:19:04.045495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-03 03:19:04.045501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-03 03:19:04.045532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-03 03:19:04.045546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-03 03:19:04.045553 | orchestrator |
2026-02-03 03:19:04.045561 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2026-02-03 03:19:04.045568 | orchestrator | Tuesday 03 February 2026 03:19:03 +0000 (0:00:03.821) 0:04:23.658 ******
2026-02-03 03:19:04.045581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-03 03:19:04.196577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-03 03:19:04.196656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-03 03:19:04.196677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-03 03:19:04.196683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-03 03:19:04.196702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-03 03:19:04.196708 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:19:04.196714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-03 03:19:04.196734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-03 03:19:04.196738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-03 03:19:04.196742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-03 03:19:04.196751 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:19:04.196755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-03 03:19:04.196759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-03 03:19:04.196763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-03 03:19:04.196775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-03 03:19:16.862634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-03 03:19:16.862881 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:19:16.862917 | orchestrator |
2026-02-03 03:19:16.862938 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2026-02-03 03:19:16.862957 | orchestrator | Tuesday 03 February 2026 03:19:04 +0000 (0:00:00.778) 0:04:24.437 ******
2026-02-03 03:19:16.862974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-03 03:19:16.863026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-03 03:19:16.863047 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:19:16.863063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-03 03:19:16.863080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-03 03:19:16.863096 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:19:16.863113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-03 03:19:16.863128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-03 03:19:16.863146 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:19:16.863164 | orchestrator |
2026-02-03 03:19:16.863181 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2026-02-03 03:19:16.863199 | orchestrator | Tuesday 03 February 2026 03:19:05 +0000 (0:00:00.970) 0:04:25.407 ******
2026-02-03 03:19:16.863217 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:19:16.863233 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:19:16.863249 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:19:16.863265 | orchestrator |
2026-02-03 03:19:16.863281 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2026-02-03 03:19:16.863298 | orchestrator | Tuesday 03 February 2026 03:19:06 +0000 (0:00:01.818) 0:04:27.225 ******
2026-02-03 03:19:16.863316 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:19:16.863333 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:19:16.863350 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:19:16.863367 | orchestrator |
2026-02-03 03:19:16.863385 | orchestrator | TASK [include_role : opensearch] ***********************************************
2026-02-03 03:19:16.863401 | orchestrator | Tuesday 03 February 2026 03:19:09 +0000 (0:00:02.190) 0:04:29.416 ******
2026-02-03 03:19:16.863419 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 03:19:16.863438 | orchestrator |
2026-02-03 03:19:16.863454
| orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-02-03 03:19:16.863470 | orchestrator | Tuesday 03 February 2026 03:19:10 +0000 (0:00:01.433) 0:04:30.850 ****** 2026-02-03 03:19:16.863507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-03 03:19:16.863556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-03 03:19:16.863590 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-03 03:19:16.863610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-03 03:19:16.863638 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-03 03:19:16.863669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-03 03:19:18.906396 | orchestrator | 2026-02-03 03:19:18.906546 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-03 03:19:18.906566 | orchestrator | Tuesday 03 February 2026 03:19:16 +0000 (0:00:06.240) 0:04:37.090 ****** 2026-02-03 03:19:18.906580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-03 03:19:18.906598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-03 03:19:18.906612 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:19:18.906733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-03 03:19:18.906753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-03 03:19:18.906811 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:19:18.906825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-03 03:19:18.906837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-03 03:19:18.906849 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:19:18.906861 | orchestrator | 2026-02-03 03:19:18.906872 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-03 03:19:18.906884 | orchestrator | Tuesday 03 February 2026 03:19:17 +0000 (0:00:01.086) 0:04:38.176 ****** 2026-02-03 03:19:18.906896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-03 03:19:18.906909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-03 03:19:18.906926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-03 03:19:18.906951 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:19:18.906979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-03 03:19:18.906999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-03 03:19:18.907018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-03 03:19:18.907036 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:19:18.907055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-03 03:19:18.907073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-03 03:19:18.907112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-03 03:19:25.402675 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:19:25.402816 | orchestrator | 2026-02-03 03:19:25.402842 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-03 03:19:25.402853 | orchestrator | Tuesday 03 February 2026 03:19:18 +0000 (0:00:00.960) 0:04:39.137 ****** 2026-02-03 03:19:25.402862 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:19:25.402871 | orchestrator | 
skipping: [testbed-node-1] 2026-02-03 03:19:25.402879 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:19:25.402887 | orchestrator | 2026-02-03 03:19:25.402896 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-03 03:19:25.402905 | orchestrator | Tuesday 03 February 2026 03:19:19 +0000 (0:00:00.470) 0:04:39.608 ****** 2026-02-03 03:19:25.402913 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:19:25.402921 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:19:25.402930 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:19:25.402938 | orchestrator | 2026-02-03 03:19:25.402946 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-03 03:19:25.402955 | orchestrator | Tuesday 03 February 2026 03:19:21 +0000 (0:00:01.794) 0:04:41.402 ****** 2026-02-03 03:19:25.402963 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:19:25.402972 | orchestrator | 2026-02-03 03:19:25.402981 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-03 03:19:25.402991 | orchestrator | Tuesday 03 February 2026 03:19:22 +0000 (0:00:01.760) 0:04:43.162 ****** 2026-02-03 03:19:25.403004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-03 03:19:25.403042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-03 03:19:25.403067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:19:25.403077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:19:25.403087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-03 03:19:25.403115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-03 03:19:25.403124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-03 03:19:25.403134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:19:25.403150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:19:25.403160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-03 03:19:25.403173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-03 03:19:25.403180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-03 03:19:25.403198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:19:27.030140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:19:27.030213 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-03 03:19:27.030238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-03 03:19:27.030255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-03 03:19:27.030261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-03 03:19:27.030282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-03 03:19:27.030292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:19:27.030307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:19:27.030316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:19:27.030323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:19:27.030330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-03 03:19:27.030337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-03 03:19:27.030349 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-03 03:19:27.754000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-03 03:19:27.754116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:19:27.754140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:19:27.754146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-03 03:19:27.754159 | orchestrator | 2026-02-03 03:19:27.754166 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-03 03:19:27.754172 | orchestrator | Tuesday 03 February 2026 03:19:27 +0000 (0:00:04.251) 0:04:47.414 ****** 2026-02-03 03:19:27.754178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-03 03:19:27.754185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-03 03:19:27.754218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:19:27.754224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:19:27.754230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-03 03:19:27.754241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-03 03:19:27.754247 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-03 03:19:27.754252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:19:27.754268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:19:27.863430 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-03 03:19:27.863521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-03 03:19:27.863552 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:19:27.863565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-03 03:19:27.863575 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:19:27.863585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:19:27.863595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-03 03:19:27.863641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-03 03:19:27.863653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-03 03:19:27.863668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 
'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-03 03:19:27.863728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:19:27.863741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-03 03:19:27.863759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:19:27.863775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:19:29.847072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:19:29.847169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-03 03:19:29.847198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-03 03:19:29.847208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-03 03:19:29.847219 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:19:29.847231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-03 03:19:29.847269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:19:29.847295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 03:19:29.847303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}})  2026-02-03 03:19:29.847311 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:19:29.847319 | orchestrator | 2026-02-03 03:19:29.847329 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-03 03:19:29.847338 | orchestrator | Tuesday 03 February 2026 03:19:27 +0000 (0:00:00.829) 0:04:48.243 ****** 2026-02-03 03:19:29.847350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-03 03:19:29.847360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-03 03:19:29.847370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-03 03:19:29.847379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-03 03:19:29.847387 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:19:29.847396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-03 03:19:29.847409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-03 03:19:29.847418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-03 03:19:29.847425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-03 03:19:29.847433 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:19:29.847440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-03 03:19:29.847447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-03 03:19:29.847455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-03 03:19:29.847468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-03 03:19:37.770475 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:19:37.770612 | orchestrator | 2026-02-03 03:19:37.770635 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-02-03 03:19:37.770646 | orchestrator | Tuesday 03 February 2026 03:19:29 +0000 (0:00:01.829) 0:04:50.073 ****** 2026-02-03 03:19:37.770657 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:19:37.770666 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:19:37.770677 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:19:37.770687 | orchestrator | 2026-02-03 03:19:37.770721 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-02-03 03:19:37.770731 | orchestrator | Tuesday 03 February 2026 03:19:30 +0000 (0:00:00.455) 0:04:50.528 ****** 2026-02-03 03:19:37.770741 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:19:37.770751 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:19:37.770761 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:19:37.770770 | orchestrator | 2026-02-03 03:19:37.770781 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-02-03 03:19:37.770791 | orchestrator | Tuesday 03 February 2026 03:19:31 +0000 (0:00:01.452) 0:04:51.980 ****** 2026-02-03 03:19:37.770801 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:19:37.770811 | orchestrator | 2026-02-03 03:19:37.770820 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-02-03 03:19:37.770830 | orchestrator | Tuesday 03 February 2026 03:19:33 +0000 (0:00:01.854) 0:04:53.835 ****** 2026-02-03 03:19:37.770844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-03 03:19:37.770886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-03 03:19:37.770938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-03 03:19:37.770951 | orchestrator | 2026-02-03 03:19:37.770961 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-02-03 03:19:37.770990 | orchestrator | Tuesday 03 February 2026 03:19:35 +0000 (0:00:02.243) 0:04:56.078 ****** 2026-02-03 03:19:37.771002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-03 03:19:37.771028 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:19:37.771041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-03 03:19:37.771054 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:19:37.771065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-03 03:19:37.771078 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:19:37.771089 | orchestrator | 2026-02-03 03:19:37.771101 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-02-03 03:19:37.771112 | orchestrator | Tuesday 03 February 2026 03:19:36 +0000 (0:00:00.423) 0:04:56.501 ****** 2026-02-03 03:19:37.771125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-03 03:19:37.771137 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:19:37.771150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-03 03:19:37.771161 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:19:37.771172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-03 03:19:37.771183 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:19:37.771194 | orchestrator | 2026-02-03 03:19:37.771206 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-03 03:19:37.771217 | orchestrator | Tuesday 03 February 2026 03:19:36 +0000 (0:00:00.660) 0:04:57.161 ****** 2026-02-03 03:19:37.771235 | orchestrator | skipping: [testbed-node-0] 
2026-02-03 03:19:47.807650 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:19:47.807824 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:19:47.807841 | orchestrator | 2026-02-03 03:19:47.807854 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-03 03:19:47.807866 | orchestrator | Tuesday 03 February 2026 03:19:37 +0000 (0:00:00.850) 0:04:58.012 ****** 2026-02-03 03:19:47.807875 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:19:47.807910 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:19:47.807921 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:19:47.807930 | orchestrator | 2026-02-03 03:19:47.807940 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-02-03 03:19:47.807950 | orchestrator | Tuesday 03 February 2026 03:19:39 +0000 (0:00:01.412) 0:04:59.425 ****** 2026-02-03 03:19:47.807960 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:19:47.807970 | orchestrator | 2026-02-03 03:19:47.807981 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-02-03 03:19:47.807991 | orchestrator | Tuesday 03 February 2026 03:19:40 +0000 (0:00:01.501) 0:05:00.926 ****** 2026-02-03 03:19:47.808019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-03 03:19:47.808044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-03 03:19:47.808061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-03 03:19:47.808103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-03 03:19:47.808146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-03 03:19:47.808165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-03 03:19:47.808184 | orchestrator | 2026-02-03 03:19:47.808196 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-03 03:19:47.808209 | orchestrator | Tuesday 03 February 2026 03:19:46 +0000 (0:00:06.042) 0:05:06.969 ****** 2026-02-03 03:19:47.808222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-03 03:19:47.808244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-03 03:19:53.666242 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:19:53.666379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-03 03:19:53.666402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-03 03:19:53.666416 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:19:53.666428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-03 03:19:53.666441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-03 03:19:53.666477 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:19:53.666489 | orchestrator | 2026-02-03 03:19:53.666503 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-03 03:19:53.666515 | orchestrator | Tuesday 03 February 2026 03:19:47 +0000 (0:00:01.079) 0:05:08.048 ****** 2026-02-03 03:19:53.666545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-03 03:19:53.666560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-03 03:19:53.666573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-03 03:19:53.666590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-03 03:19:53.666601 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:19:53.666613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-03 03:19:53.666624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-03 03:19:53.666635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-03 03:19:53.666646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-03 
03:19:53.666657 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:19:53.666668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-03 03:19:53.666679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-03 03:19:53.666691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-03 03:19:53.666727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-03 03:19:53.666739 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:19:53.666750 | orchestrator | 2026-02-03 03:19:53.666771 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-03 03:19:53.666784 | orchestrator | Tuesday 03 February 2026 03:19:48 +0000 (0:00:00.946) 0:05:08.994 ****** 2026-02-03 03:19:53.666797 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:19:53.666809 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:19:53.666823 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:19:53.666835 | orchestrator | 2026-02-03 03:19:53.666847 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-02-03 03:19:53.666860 | orchestrator | Tuesday 03 February 2026 03:19:50 +0000 (0:00:01.314) 0:05:10.309 ****** 2026-02-03 03:19:53.666872 | orchestrator | 
changed: [testbed-node-0] 2026-02-03 03:19:53.666884 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:19:53.666897 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:19:53.666909 | orchestrator | 2026-02-03 03:19:53.666923 | orchestrator | TASK [include_role : swift] **************************************************** 2026-02-03 03:19:53.666935 | orchestrator | Tuesday 03 February 2026 03:19:52 +0000 (0:00:02.264) 0:05:12.573 ****** 2026-02-03 03:19:53.666948 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:19:53.666961 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:19:53.666973 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:19:53.666985 | orchestrator | 2026-02-03 03:19:53.666998 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-03 03:19:53.667011 | orchestrator | Tuesday 03 February 2026 03:19:52 +0000 (0:00:00.659) 0:05:13.232 ****** 2026-02-03 03:19:53.667023 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:19:53.667036 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:19:53.667072 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:19:53.667086 | orchestrator | 2026-02-03 03:19:53.667098 | orchestrator | TASK [include_role : trove] **************************************************** 2026-02-03 03:19:53.667111 | orchestrator | Tuesday 03 February 2026 03:19:53 +0000 (0:00:00.349) 0:05:13.582 ****** 2026-02-03 03:19:53.667124 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:19:53.667142 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:20:39.428844 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:20:39.428956 | orchestrator | 2026-02-03 03:20:39.428974 | orchestrator | TASK [include_role : venus] **************************************************** 2026-02-03 03:20:39.428987 | orchestrator | Tuesday 03 February 2026 03:19:53 +0000 (0:00:00.327) 0:05:13.909 ****** 2026-02-03 03:20:39.428995 | orchestrator | 
skipping: [testbed-node-0] 2026-02-03 03:20:39.429003 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:20:39.429011 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:20:39.429020 | orchestrator | 2026-02-03 03:20:39.429030 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-03 03:20:39.429039 | orchestrator | Tuesday 03 February 2026 03:19:53 +0000 (0:00:00.313) 0:05:14.223 ****** 2026-02-03 03:20:39.429049 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:20:39.429058 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:20:39.429067 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:20:39.429076 | orchestrator | 2026-02-03 03:20:39.429085 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-03 03:20:39.429111 | orchestrator | Tuesday 03 February 2026 03:19:54 +0000 (0:00:00.658) 0:05:14.881 ****** 2026-02-03 03:20:39.429121 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:20:39.429130 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:20:39.429139 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:20:39.429148 | orchestrator | 2026-02-03 03:20:39.429157 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-03 03:20:39.429166 | orchestrator | Tuesday 03 February 2026 03:19:55 +0000 (0:00:00.533) 0:05:15.414 ****** 2026-02-03 03:20:39.429175 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:20:39.429184 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:20:39.429193 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:20:39.429202 | orchestrator | 2026-02-03 03:20:39.429211 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-02-03 03:20:39.429241 | orchestrator | Tuesday 03 February 2026 03:19:55 +0000 (0:00:00.724) 0:05:16.139 ****** 2026-02-03 03:20:39.429251 | orchestrator | ok: [testbed-node-0] 
2026-02-03 03:20:39.429260 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:20:39.429269 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:20:39.429277 | orchestrator |
2026-02-03 03:20:39.429287 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-02-03 03:20:39.429296 | orchestrator | Tuesday 03 February 2026 03:19:56 +0000 (0:00:00.354) 0:05:16.493 ******
2026-02-03 03:20:39.429305 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:20:39.429312 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:20:39.429321 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:20:39.429328 | orchestrator |
2026-02-03 03:20:39.429337 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-02-03 03:20:39.429345 | orchestrator | Tuesday 03 February 2026 03:19:57 +0000 (0:00:01.279) 0:05:17.773 ******
2026-02-03 03:20:39.429355 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:20:39.429363 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:20:39.429371 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:20:39.429381 | orchestrator |
2026-02-03 03:20:39.429390 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-02-03 03:20:39.429400 | orchestrator | Tuesday 03 February 2026 03:19:58 +0000 (0:00:00.896) 0:05:18.669 ******
2026-02-03 03:20:39.429409 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:20:39.429417 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:20:39.429426 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:20:39.429436 | orchestrator |
2026-02-03 03:20:39.429445 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-02-03 03:20:39.429453 | orchestrator | Tuesday 03 February 2026 03:19:59 +0000 (0:00:00.884) 0:05:19.553 ******
2026-02-03 03:20:39.429462 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:20:39.429471 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:20:39.429480 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:20:39.429489 | orchestrator |
2026-02-03 03:20:39.429497 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-02-03 03:20:39.429506 | orchestrator | Tuesday 03 February 2026 03:20:07 +0000 (0:00:08.240) 0:05:27.794 ******
2026-02-03 03:20:39.429515 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:20:39.429523 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:20:39.429532 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:20:39.429540 | orchestrator |
2026-02-03 03:20:39.429549 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-02-03 03:20:39.429558 | orchestrator | Tuesday 03 February 2026 03:20:08 +0000 (0:00:01.170) 0:05:28.964 ******
2026-02-03 03:20:39.429567 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:20:39.429575 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:20:39.429584 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:20:39.429593 | orchestrator |
2026-02-03 03:20:39.429603 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-02-03 03:20:39.429611 | orchestrator | Tuesday 03 February 2026 03:20:23 +0000 (0:00:15.206) 0:05:44.171 ******
2026-02-03 03:20:39.429620 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:20:39.429630 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:20:39.429639 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:20:39.429647 | orchestrator |
2026-02-03 03:20:39.429656 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-02-03 03:20:39.429665 | orchestrator | Tuesday 03 February 2026 03:20:24 +0000 (0:00:00.725) 0:05:44.896 ******
2026-02-03 03:20:39.429674 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:20:39.429682 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:20:39.429691 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:20:39.429700 | orchestrator |
2026-02-03 03:20:39.429709 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-02-03 03:20:39.429717 | orchestrator | Tuesday 03 February 2026 03:20:33 +0000 (0:00:09.101) 0:05:53.998 ******
2026-02-03 03:20:39.429738 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:20:39.429802 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:20:39.429811 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:20:39.429819 | orchestrator |
2026-02-03 03:20:39.429828 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-02-03 03:20:39.429837 | orchestrator | Tuesday 03 February 2026 03:20:34 +0000 (0:00:00.721) 0:05:54.720 ******
2026-02-03 03:20:39.429846 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:20:39.429854 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:20:39.429863 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:20:39.429870 | orchestrator |
2026-02-03 03:20:39.429899 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-02-03 03:20:39.429909 | orchestrator | Tuesday 03 February 2026 03:20:34 +0000 (0:00:00.435) 0:05:55.155 ******
2026-02-03 03:20:39.429917 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:20:39.429926 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:20:39.429934 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:20:39.429943 | orchestrator |
2026-02-03 03:20:39.429951 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-02-03 03:20:39.429959 | orchestrator | Tuesday 03 February 2026 03:20:35 +0000 (0:00:00.388) 0:05:55.543 ******
2026-02-03 03:20:39.429968 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:20:39.429977 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:20:39.429984 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:20:39.429992 | orchestrator |
2026-02-03 03:20:39.430000 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-02-03 03:20:39.430009 | orchestrator | Tuesday 03 February 2026 03:20:35 +0000 (0:00:00.382) 0:05:55.926 ******
2026-02-03 03:20:39.430082 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:20:39.430100 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:20:39.430108 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:20:39.430116 | orchestrator |
2026-02-03 03:20:39.430124 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-02-03 03:20:39.430132 | orchestrator | Tuesday 03 February 2026 03:20:36 +0000 (0:00:00.677) 0:05:56.603 ******
2026-02-03 03:20:39.430139 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:20:39.430147 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:20:39.430155 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:20:39.430162 | orchestrator |
2026-02-03 03:20:39.430170 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-02-03 03:20:39.430177 | orchestrator | Tuesday 03 February 2026 03:20:36 +0000 (0:00:00.357) 0:05:56.961 ******
2026-02-03 03:20:39.430185 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:20:39.430195 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:20:39.430203 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:20:39.430211 | orchestrator |
2026-02-03 03:20:39.430219 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-02-03 03:20:39.430228 | orchestrator | Tuesday 03 February 2026 03:20:37 +0000 (0:00:00.954) 0:05:57.916 ******
2026-02-03 03:20:39.430237 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:20:39.430245 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:20:39.430253 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:20:39.430262 | orchestrator |
2026-02-03 03:20:39.430270 | orchestrator | PLAY RECAP *********************************************************************
2026-02-03 03:20:39.430280 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-03 03:20:39.430292 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-03 03:20:39.430301 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-03 03:20:39.430310 | orchestrator |
2026-02-03 03:20:39.430331 | orchestrator |
2026-02-03 03:20:39.430340 | orchestrator | TASKS RECAP ********************************************************************
2026-02-03 03:20:39.430349 | orchestrator | Tuesday 03 February 2026 03:20:38 +0000 (0:00:00.854) 0:05:58.771 ******
2026-02-03 03:20:39.430358 | orchestrator | ===============================================================================
2026-02-03 03:20:39.430367 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 15.21s
2026-02-03 03:20:39.430376 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.10s
2026-02-03 03:20:39.430385 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.24s
2026-02-03 03:20:39.430394 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.24s
2026-02-03 03:20:39.430404 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.04s
2026-02-03 03:20:39.430413 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.29s
2026-02-03 03:20:39.430422 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.25s
2026-02-03 03:20:39.430431 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.24s
2026-02-03 03:20:39.430440 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.12s
2026-02-03 03:20:39.430449 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.87s
2026-02-03 03:20:39.430458 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.82s
2026-02-03 03:20:39.430467 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.59s
2026-02-03 03:20:39.430476 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.56s
2026-02-03 03:20:39.430485 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.48s
2026-02-03 03:20:39.430494 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.48s
2026-02-03 03:20:39.430502 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.47s
2026-02-03 03:20:39.430510 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.46s
2026-02-03 03:20:39.430519 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.42s
2026-02-03 03:20:39.430528 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.36s
2026-02-03 03:20:39.430537 | orchestrator | proxysql-config : Copying over nova-cell ProxySQL rules config ---------- 3.32s
2026-02-03 03:20:41.801826 | orchestrator | 2026-02-03 03:20:41 | INFO  | Task 3028971d-2d70-4abd-9858-941367ec7f14 (opensearch) was prepared for execution.
2026-02-03 03:20:41.801953 | orchestrator | 2026-02-03 03:20:41 | INFO  | It takes a moment until task 3028971d-2d70-4abd-9858-941367ec7f14 (opensearch) has been started and output is visible here.
2026-02-03 03:20:52.870165 | orchestrator | 2026-02-03 03:20:52.870287 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-03 03:20:52.870306 | orchestrator | 2026-02-03 03:20:52.870317 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-03 03:20:52.870329 | orchestrator | Tuesday 03 February 2026 03:20:46 +0000 (0:00:00.285) 0:00:00.285 ****** 2026-02-03 03:20:52.870340 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:20:52.870354 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:20:52.870365 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:20:52.870377 | orchestrator | 2026-02-03 03:20:52.870388 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-03 03:20:52.870399 | orchestrator | Tuesday 03 February 2026 03:20:46 +0000 (0:00:00.305) 0:00:00.590 ****** 2026-02-03 03:20:52.870459 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-02-03 03:20:52.870473 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-02-03 03:20:52.870484 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-02-03 03:20:52.870496 | orchestrator | 2026-02-03 03:20:52.870508 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-02-03 03:20:52.870556 | orchestrator | 2026-02-03 03:20:52.870569 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-03 03:20:52.870581 | orchestrator | Tuesday 03 February 2026 03:20:46 +0000 (0:00:00.448) 0:00:01.038 ****** 2026-02-03 03:20:52.870595 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:20:52.870607 | orchestrator | 2026-02-03 03:20:52.870619 | orchestrator | TASK [opensearch : Setting sysctl values] 
************************************** 2026-02-03 03:20:52.870630 | orchestrator | Tuesday 03 February 2026 03:20:47 +0000 (0:00:00.540) 0:00:01.578 ****** 2026-02-03 03:20:52.870642 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-03 03:20:52.870654 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-03 03:20:52.870667 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-03 03:20:52.870680 | orchestrator | 2026-02-03 03:20:52.870691 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-02-03 03:20:52.870703 | orchestrator | Tuesday 03 February 2026 03:20:48 +0000 (0:00:00.701) 0:00:02.280 ****** 2026-02-03 03:20:52.870718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-03 03:20:52.870735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': 
'-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-03 03:20:52.870801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-03 03:20:52.870827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-03 03:20:52.870855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-03 03:20:52.870868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-03 03:20:52.870877 | orchestrator | 2026-02-03 03:20:52.870885 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-03 03:20:52.870892 | orchestrator | Tuesday 03 February 2026 03:20:49 +0000 (0:00:01.689) 0:00:03.970 ****** 2026-02-03 03:20:52.870900 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:20:52.870907 | orchestrator | 2026-02-03 03:20:52.870915 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-02-03 03:20:52.870922 | orchestrator | Tuesday 03 February 2026 03:20:50 +0000 (0:00:00.542) 0:00:04.512 ****** 2026-02-03 03:20:52.870944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-03 03:20:53.731352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-03 03:20:53.731457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-03 03:20:53.731476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-03 03:20:53.731508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-03 03:20:53.731596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-03 03:20:53.731613 | orchestrator | 2026-02-03 03:20:53.731627 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-02-03 03:20:53.731639 | orchestrator | Tuesday 03 February 2026 03:20:52 +0000 (0:00:02.406) 0:00:06.918 ****** 2026-02-03 03:20:53.731652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-03 03:20:53.731665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-03 03:20:53.731677 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:20:53.731690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-03 03:20:53.731726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-03 03:20:54.769115 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:20:54.769224 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-03 03:20:54.769253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-03 03:20:54.769275 | 
orchestrator | skipping: [testbed-node-2] 2026-02-03 03:20:54.769293 | orchestrator | 2026-02-03 03:20:54.769312 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-02-03 03:20:54.769331 | orchestrator | Tuesday 03 February 2026 03:20:53 +0000 (0:00:00.859) 0:00:07.778 ****** 2026-02-03 03:20:54.769378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-03 03:20:54.769417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-03 03:20:54.769459 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:20:54.769478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-03 03:20:54.769497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-03 03:20:54.769515 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:20:54.769652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-03 03:20:54.769685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-03 03:20:54.769706 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:20:54.769724 | orchestrator | 2026-02-03 03:20:54.769743 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-02-03 03:20:54.769799 | orchestrator | Tuesday 03 February 2026 03:20:54 +0000 (0:00:01.032) 0:00:08.811 ****** 2026-02-03 03:21:02.889624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-03 03:21:02.889851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-03 03:21:02.889879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-03 03:21:02.889944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-03 03:21:02.889985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-03 03:21:02.890000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-03 03:21:02.890077 | orchestrator | 2026-02-03 03:21:02.890093 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-03 03:21:02.890106 | orchestrator | Tuesday 03 February 2026 03:20:57 +0000 (0:00:02.354) 0:00:11.165 ****** 2026-02-03 03:21:02.890118 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:21:02.890133 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:21:02.890146 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:21:02.890160 | orchestrator | 2026-02-03 03:21:02.890174 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-02-03 03:21:02.890192 | orchestrator | Tuesday 03 February 2026 03:20:59 +0000 (0:00:02.361) 0:00:13.527 ****** 2026-02-03 03:21:02.890208 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:21:02.890221 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:21:02.890234 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:21:02.890248 | orchestrator | 2026-02-03 03:21:02.890262 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-02-03 
03:21:02.890275 | orchestrator | Tuesday 03 February 2026 03:21:01 +0000 (0:00:01.759) 0:00:15.286 ****** 2026-02-03 03:21:02.890302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-03 03:21:02.890325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-03 03:21:02.890349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-03 03:23:41.900266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-03 03:23:41.900414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-03 03:23:41.900453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-03 03:23:41.900463 | orchestrator | 2026-02-03 03:23:41.900472 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-03 03:23:41.900480 | orchestrator | Tuesday 03 February 2026 03:21:02 +0000 (0:00:01.650) 0:00:16.937 ****** 2026-02-03 03:23:41.900487 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:23:41.900497 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:23:41.900509 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:23:41.900520 | orchestrator | 2026-02-03 03:23:41.900533 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-03 03:23:41.900540 | orchestrator | Tuesday 03 February 2026 03:21:03 +0000 (0:00:00.327) 0:00:17.265 ****** 2026-02-03 03:23:41.900547 | orchestrator | 2026-02-03 03:23:41.900554 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-03 03:23:41.900560 | orchestrator | Tuesday 03 February 2026 03:21:03 +0000 (0:00:00.067) 0:00:17.332 ****** 2026-02-03 03:23:41.900567 | orchestrator | 2026-02-03 03:23:41.900573 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-03 03:23:41.900587 | orchestrator | Tuesday 03 February 2026 03:21:03 +0000 (0:00:00.065) 0:00:17.398 ****** 2026-02-03 03:23:41.900593 | orchestrator | 2026-02-03 03:23:41.900600 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-03 03:23:41.900622 | orchestrator | Tuesday 03 February 2026 03:21:03 +0000 (0:00:00.072) 0:00:17.470 ****** 2026-02-03 03:23:41.900629 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:23:41.900636 | orchestrator | 2026-02-03 03:23:41.900643 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-03 03:23:41.900649 | 
orchestrator | Tuesday 03 February 2026 03:21:03 +0000 (0:00:00.213) 0:00:17.684 ****** 2026-02-03 03:23:41.900656 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:23:41.900663 | orchestrator | 2026-02-03 03:23:41.900670 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-03 03:23:41.900676 | orchestrator | Tuesday 03 February 2026 03:21:04 +0000 (0:00:00.674) 0:00:18.359 ****** 2026-02-03 03:23:41.900683 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:23:41.900690 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:23:41.900697 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:23:41.900703 | orchestrator | 2026-02-03 03:23:41.900710 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-03 03:23:41.900717 | orchestrator | Tuesday 03 February 2026 03:22:09 +0000 (0:01:05.201) 0:01:23.560 ****** 2026-02-03 03:23:41.900724 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:23:41.900730 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:23:41.900737 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:23:41.900743 | orchestrator | 2026-02-03 03:23:41.900750 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-03 03:23:41.900757 | orchestrator | Tuesday 03 February 2026 03:23:30 +0000 (0:01:21.405) 0:02:44.965 ****** 2026-02-03 03:23:41.900765 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:23:41.900772 | orchestrator | 2026-02-03 03:23:41.900783 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-03 03:23:41.900794 | orchestrator | Tuesday 03 February 2026 03:23:31 +0000 (0:00:00.568) 0:02:45.533 ****** 2026-02-03 03:23:41.900805 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:23:41.900814 | orchestrator | 2026-02-03 
03:23:41.900821 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-03 03:23:41.900829 | orchestrator | Tuesday 03 February 2026 03:23:34 +0000 (0:00:02.692) 0:02:48.226 ****** 2026-02-03 03:23:41.900836 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:23:41.900844 | orchestrator | 2026-02-03 03:23:41.900851 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-02-03 03:23:41.900859 | orchestrator | Tuesday 03 February 2026 03:23:36 +0000 (0:00:02.274) 0:02:50.501 ****** 2026-02-03 03:23:41.900867 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:23:41.900875 | orchestrator | 2026-02-03 03:23:41.900883 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-03 03:23:41.900891 | orchestrator | Tuesday 03 February 2026 03:23:39 +0000 (0:00:02.837) 0:02:53.338 ****** 2026-02-03 03:23:41.900927 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:23:41.900935 | orchestrator | 2026-02-03 03:23:41.900943 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 03:23:41.900952 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-03 03:23:41.900961 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-03 03:23:41.900974 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-03 03:23:41.900982 | orchestrator | 2026-02-03 03:23:41.900990 | orchestrator | 2026-02-03 03:23:41.901003 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 03:23:41.901011 | orchestrator | Tuesday 03 February 2026 03:23:41 +0000 (0:00:02.595) 0:02:55.934 ****** 2026-02-03 03:23:41.901019 | orchestrator | 
=============================================================================== 2026-02-03 03:23:41.901027 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 81.41s 2026-02-03 03:23:41.901034 | orchestrator | opensearch : Restart opensearch container ------------------------------ 65.20s 2026-02-03 03:23:41.901042 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.84s 2026-02-03 03:23:41.901049 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.69s 2026-02-03 03:23:41.901057 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.60s 2026-02-03 03:23:41.901065 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.41s 2026-02-03 03:23:41.901072 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.36s 2026-02-03 03:23:41.901080 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.35s 2026-02-03 03:23:41.901087 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.27s 2026-02-03 03:23:41.901095 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.76s 2026-02-03 03:23:41.901103 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.69s 2026-02-03 03:23:41.901111 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.65s 2026-02-03 03:23:41.901119 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.03s 2026-02-03 03:23:41.901126 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.86s 2026-02-03 03:23:41.901134 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.70s 2026-02-03 03:23:41.901142 | orchestrator | 
opensearch : Perform a flush -------------------------------------------- 0.67s 2026-02-03 03:23:41.901154 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2026-02-03 03:23:42.290960 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s 2026-02-03 03:23:42.291067 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s 2026-02-03 03:23:42.291086 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s 2026-02-03 03:23:44.726967 | orchestrator | 2026-02-03 03:23:44 | INFO  | Task e34366e0-d1a6-4b57-941f-8f8fbd60715a (memcached) was prepared for execution. 2026-02-03 03:23:44.727080 | orchestrator | 2026-02-03 03:23:44 | INFO  | It takes a moment until task e34366e0-d1a6-4b57-941f-8f8fbd60715a (memcached) has been started and output is visible here. 2026-02-03 03:23:57.072085 | orchestrator | 2026-02-03 03:23:57.072192 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-03 03:23:57.072211 | orchestrator | 2026-02-03 03:23:57.072224 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-03 03:23:57.072237 | orchestrator | Tuesday 03 February 2026 03:23:49 +0000 (0:00:00.271) 0:00:00.271 ****** 2026-02-03 03:23:57.072249 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:23:57.072262 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:23:57.072274 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:23:57.072286 | orchestrator | 2026-02-03 03:23:57.072297 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-03 03:23:57.072307 | orchestrator | Tuesday 03 February 2026 03:23:49 +0000 (0:00:00.344) 0:00:00.616 ****** 2026-02-03 03:23:57.072320 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-02-03 03:23:57.072334 | 
orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-02-03 03:23:57.072345 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-02-03 03:23:57.072358 | orchestrator | 2026-02-03 03:23:57.072370 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-02-03 03:23:57.072409 | orchestrator | 2026-02-03 03:23:57.072418 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-02-03 03:23:57.072425 | orchestrator | Tuesday 03 February 2026 03:23:49 +0000 (0:00:00.449) 0:00:01.065 ****** 2026-02-03 03:23:57.072432 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:23:57.072440 | orchestrator | 2026-02-03 03:23:57.072446 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-02-03 03:23:57.072453 | orchestrator | Tuesday 03 February 2026 03:23:50 +0000 (0:00:00.508) 0:00:01.574 ****** 2026-02-03 03:23:57.072460 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-02-03 03:23:57.072467 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-02-03 03:23:57.072474 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-02-03 03:23:57.072480 | orchestrator | 2026-02-03 03:23:57.072487 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-02-03 03:23:57.072494 | orchestrator | Tuesday 03 February 2026 03:23:51 +0000 (0:00:00.701) 0:00:02.275 ****** 2026-02-03 03:23:57.072500 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-02-03 03:23:57.072507 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-02-03 03:23:57.072514 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-02-03 03:23:57.072520 | orchestrator | 2026-02-03 03:23:57.072530 | orchestrator | TASK [memcached : Check 
memcached container] *********************************** 2026-02-03 03:23:57.072541 | orchestrator | Tuesday 03 February 2026 03:23:52 +0000 (0:00:01.795) 0:00:04.071 ****** 2026-02-03 03:23:57.072569 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:23:57.072580 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:23:57.072591 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:23:57.072603 | orchestrator | 2026-02-03 03:23:57.072615 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-02-03 03:23:57.072626 | orchestrator | Tuesday 03 February 2026 03:23:54 +0000 (0:00:01.631) 0:00:05.703 ****** 2026-02-03 03:23:57.072638 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:23:57.072651 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:23:57.072663 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:23:57.072674 | orchestrator | 2026-02-03 03:23:57.072687 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 03:23:57.072696 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-03 03:23:57.072706 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-03 03:23:57.072714 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-03 03:23:57.072722 | orchestrator | 2026-02-03 03:23:57.072730 | orchestrator | 2026-02-03 03:23:57.072738 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 03:23:57.072746 | orchestrator | Tuesday 03 February 2026 03:23:56 +0000 (0:00:02.121) 0:00:07.824 ****** 2026-02-03 03:23:57.072754 | orchestrator | =============================================================================== 2026-02-03 03:23:57.072762 | orchestrator | memcached : Restart memcached container 
--------------------------------- 2.12s 2026-02-03 03:23:57.072770 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.80s 2026-02-03 03:23:57.072778 | orchestrator | memcached : Check memcached container ----------------------------------- 1.63s 2026-02-03 03:23:57.072786 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.70s 2026-02-03 03:23:57.072794 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.51s 2026-02-03 03:23:57.072802 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s 2026-02-03 03:23:57.072811 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2026-02-03 03:23:59.523329 | orchestrator | 2026-02-03 03:23:59 | INFO  | Task a97207dc-576e-4783-a5de-8232690b34df (redis) was prepared for execution. 2026-02-03 03:23:59.523405 | orchestrator | 2026-02-03 03:23:59 | INFO  | It takes a moment until task a97207dc-576e-4783-a5de-8232690b34df (redis) has been started and output is visible here. 
2026-02-03 03:24:08.669236 | orchestrator | 2026-02-03 03:24:08.669329 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-03 03:24:08.669336 | orchestrator | 2026-02-03 03:24:08.669341 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-03 03:24:08.669346 | orchestrator | Tuesday 03 February 2026 03:24:03 +0000 (0:00:00.292) 0:00:00.292 ****** 2026-02-03 03:24:08.669350 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:24:08.669356 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:24:08.669359 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:24:08.669363 | orchestrator | 2026-02-03 03:24:08.669367 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-03 03:24:08.669372 | orchestrator | Tuesday 03 February 2026 03:24:04 +0000 (0:00:00.332) 0:00:00.624 ****** 2026-02-03 03:24:08.669376 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-02-03 03:24:08.669380 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-02-03 03:24:08.669384 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-02-03 03:24:08.669388 | orchestrator | 2026-02-03 03:24:08.669391 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-02-03 03:24:08.669395 | orchestrator | 2026-02-03 03:24:08.669399 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-02-03 03:24:08.669403 | orchestrator | Tuesday 03 February 2026 03:24:04 +0000 (0:00:00.427) 0:00:01.052 ****** 2026-02-03 03:24:08.669406 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:24:08.669411 | orchestrator | 2026-02-03 03:24:08.669415 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-02-03 
03:24:08.669419 | orchestrator | Tuesday 03 February 2026 03:24:05 +0000 (0:00:00.514) 0:00:01.566 ****** 2026-02-03 03:24:08.669425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-03 03:24:08.669433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-03 03:24:08.669438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-03 03:24:08.669457 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-03 03:24:08.669473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-03 03:24:08.669477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-03 03:24:08.669482 | orchestrator | 2026-02-03 03:24:08.669486 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-02-03 03:24:08.669490 | orchestrator | Tuesday 03 February 2026 03:24:06 +0000 (0:00:01.178) 0:00:02.745 ****** 2026-02-03 03:24:08.669494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-03 03:24:08.669552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-03 03:24:08.669559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-03 03:24:08.669567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-03 03:24:08.669576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-03 03:24:12.828637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-03 03:24:12.828735 | orchestrator | 2026-02-03 03:24:12.828750 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-02-03 03:24:12.828788 | orchestrator | Tuesday 03 February 2026 03:24:08 +0000 (0:00:02.386) 0:00:05.131 ****** 2026-02-03 03:24:12.828801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-03 03:24:12.828829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-03 
03:24:12.828840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-03 03:24:12.828875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-03 03:24:12.828887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-03 03:24:12.828914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-03 03:24:12.828954 | orchestrator | 2026-02-03 03:24:12.828972 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-02-03 03:24:12.828989 | orchestrator | Tuesday 03 February 2026 03:24:11 +0000 (0:00:02.462) 0:00:07.594 ****** 2026-02-03 03:24:12.829005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-03 03:24:12.829022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-03 03:24:12.829048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-03 03:24:12.829073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-03 03:24:12.829090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-03 03:24:12.829117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-03 03:24:27.607592 | orchestrator | 2026-02-03 03:24:27.607778 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-03 03:24:27.607801 | orchestrator | Tuesday 03 February 2026 03:24:12 +0000 (0:00:01.486) 0:00:09.080 ****** 2026-02-03 03:24:27.607814 | orchestrator | 2026-02-03 03:24:27.607826 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-03 03:24:27.607837 | orchestrator | Tuesday 03 February 2026 03:24:12 +0000 (0:00:00.064) 0:00:09.145 ****** 2026-02-03 03:24:27.607848 | orchestrator | 2026-02-03 03:24:27.607860 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-03 03:24:27.607871 | orchestrator | Tuesday 03 February 2026 
03:24:12 +0000 (0:00:00.065) 0:00:09.210 ****** 2026-02-03 03:24:27.607882 | orchestrator | 2026-02-03 03:24:27.607939 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-02-03 03:24:27.607953 | orchestrator | Tuesday 03 February 2026 03:24:12 +0000 (0:00:00.084) 0:00:09.295 ****** 2026-02-03 03:24:27.607969 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:24:27.607990 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:24:27.608009 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:24:27.608029 | orchestrator | 2026-02-03 03:24:27.608049 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-02-03 03:24:27.608071 | orchestrator | Tuesday 03 February 2026 03:24:19 +0000 (0:00:06.587) 0:00:15.882 ****** 2026-02-03 03:24:27.608117 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:24:27.608131 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:24:27.608143 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:24:27.608162 | orchestrator | 2026-02-03 03:24:27.608182 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 03:24:27.608203 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-03 03:24:27.608224 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-03 03:24:27.608266 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-03 03:24:27.608283 | orchestrator | 2026-02-03 03:24:27.608300 | orchestrator | 2026-02-03 03:24:27.608318 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 03:24:27.608337 | orchestrator | Tuesday 03 February 2026 03:24:27 +0000 (0:00:07.661) 0:00:23.544 ****** 2026-02-03 03:24:27.608356 | orchestrator | 
=============================================================================== 2026-02-03 03:24:27.608375 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 7.66s 2026-02-03 03:24:27.608394 | orchestrator | redis : Restart redis container ----------------------------------------- 6.59s 2026-02-03 03:24:27.608413 | orchestrator | redis : Copying over redis config files --------------------------------- 2.46s 2026-02-03 03:24:27.608435 | orchestrator | redis : Copying over default config.json files -------------------------- 2.39s 2026-02-03 03:24:27.608455 | orchestrator | redis : Check redis containers ------------------------------------------ 1.49s 2026-02-03 03:24:27.608476 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.18s 2026-02-03 03:24:27.608496 | orchestrator | redis : include_tasks --------------------------------------------------- 0.51s 2026-02-03 03:24:27.608515 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s 2026-02-03 03:24:27.608533 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2026-02-03 03:24:27.608551 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.21s 2026-02-03 03:24:29.818589 | orchestrator | 2026-02-03 03:24:29 | INFO  | Task 503dc600-ffa0-4533-8fc0-f05d63419e1e (mariadb) was prepared for execution. 2026-02-03 03:24:29.818703 | orchestrator | 2026-02-03 03:24:29 | INFO  | It takes a moment until task 503dc600-ffa0-4533-8fc0-f05d63419e1e (mariadb) has been started and output is visible here. 
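The redis task output above repeatedly logs a `healthcheck` dict per service (`interval`, `retries`, `start_period`, `test`, `timeout`). As a rough mental model of what that dict expresses, the sketch below renders it as the documented `docker run --health-*` options; this mapping is an assumption for illustration (kolla-ansible applies the healthcheck through its own container module, and the `healthcheck_flags` helper is mine, not OSISM code).

```python
# Assumption: interval/timeout/start_period values are seconds, and a leading
# "CMD-SHELL" entry in `test` means the rest of the list is one shell command.
def healthcheck_flags(hc):
    """Render a kolla-style healthcheck dict as docker run --health-* flags."""
    test = hc["test"]
    cmd = " ".join(test[1:] if test and test[0] == "CMD-SHELL" else test)
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# The redis healthcheck exactly as it appears in the log above.
redis_hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_listen redis-server 6379"],
    "timeout": "30",
}
print(healthcheck_flags(redis_hc)[0])
```

The redis-sentinel entries in the log differ only in the probed command and port (`healthcheck_listen redis-sentinel 26379`), so the same reading applies.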
2026-02-03 03:24:43.318432 | orchestrator | 2026-02-03 03:24:43.318551 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-03 03:24:43.318559 | orchestrator | 2026-02-03 03:24:43.318565 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-03 03:24:43.318570 | orchestrator | Tuesday 03 February 2026 03:24:33 +0000 (0:00:00.181) 0:00:00.181 ****** 2026-02-03 03:24:43.318575 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:24:43.318582 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:24:43.318586 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:24:43.318590 | orchestrator | 2026-02-03 03:24:43.318595 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-03 03:24:43.318600 | orchestrator | Tuesday 03 February 2026 03:24:34 +0000 (0:00:00.333) 0:00:00.515 ****** 2026-02-03 03:24:43.318604 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-03 03:24:43.318610 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-02-03 03:24:43.318614 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-03 03:24:43.318618 | orchestrator | 2026-02-03 03:24:43.318622 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-03 03:24:43.318626 | orchestrator | 2026-02-03 03:24:43.318631 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-03 03:24:43.318654 | orchestrator | Tuesday 03 February 2026 03:24:34 +0000 (0:00:00.598) 0:00:01.113 ****** 2026-02-03 03:24:43.318659 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-03 03:24:43.318664 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-03 03:24:43.318668 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-03 03:24:43.318672 | orchestrator | 
2026-02-03 03:24:43.318676 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-03 03:24:43.318680 | orchestrator | Tuesday 03 February 2026 03:24:35 +0000 (0:00:00.434) 0:00:01.548 ****** 2026-02-03 03:24:43.318685 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:24:43.318691 | orchestrator | 2026-02-03 03:24:43.318695 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-03 03:24:43.318699 | orchestrator | Tuesday 03 February 2026 03:24:35 +0000 (0:00:00.541) 0:00:02.089 ****** 2026-02-03 03:24:43.318727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-03 03:24:43.318763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-03 03:24:43.318784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' 
server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-03 03:24:43.318792 | orchestrator | 2026-02-03 03:24:43.318798 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-02-03 03:24:43.318805 | orchestrator | Tuesday 03 February 2026 03:24:38 +0000 (0:00:02.547) 0:00:04.637 ****** 2026-02-03 03:24:43.318856 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:24:43.318867 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:24:43.318873 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:24:43.318880 | orchestrator | 2026-02-03 03:24:43.318886 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-02-03 03:24:43.318892 | orchestrator | Tuesday 03 February 2026 03:24:38 +0000 (0:00:00.642) 0:00:05.280 ****** 2026-02-03 03:24:43.318898 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:24:43.318905 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:24:43.318911 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:24:43.318917 | orchestrator | 2026-02-03 03:24:43.318923 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-02-03 03:24:43.318928 | orchestrator | Tuesday 03 February 2026 03:24:40 +0000 (0:00:01.443) 0:00:06.724 ****** 2026-02-03 03:24:43.318942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-03 03:24:51.018108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-03 03:24:51.018238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-03 03:24:51.018287 | orchestrator |
2026-02-03 03:24:51.018306 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-02-03 03:24:51.018323 | orchestrator | Tuesday 03 February 2026 03:24:43 +0000 (0:00:03.055) 0:00:09.779 ******
2026-02-03 03:24:51.018337 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:24:51.018352 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:24:51.018366 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:24:51.018380 | orchestrator |
2026-02-03 03:24:51.018394 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-02-03 03:24:51.018427 | orchestrator | Tuesday 03 February 2026 03:24:44 +0000 (0:00:01.111) 0:00:10.891 ******
2026-02-03 03:24:51.018441 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:24:51.018455 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:24:51.018469 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:24:51.018483 | orchestrator |
2026-02-03 03:24:51.018497 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-03 03:24:51.018511 | orchestrator | Tuesday 03 February 2026 03:24:48 +0000 (0:00:03.773) 0:00:14.665 ******
2026-02-03 03:24:51.018526 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 03:24:51.018540 | orchestrator |
2026-02-03 03:24:51.018556 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-02-03 03:24:51.018571 | orchestrator | Tuesday 03 February 2026 03:24:48 +0000 (0:00:00.543) 0:00:15.208 ******
2026-02-03 03:24:51.018595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 03:24:51.018621 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:24:51.018649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 03:24:55.765372 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:24:55.765465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 03:24:55.765494 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:24:55.765501 | orchestrator | 2026-02-03 03:24:55.765508 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-03 03:24:55.765515 | orchestrator | Tuesday 03 February 2026 03:24:51 +0000 (0:00:02.271) 0:00:17.480 ****** 2026-02-03 03:24:55.765521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 03:24:55.765527 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:24:55.765549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 03:24:55.765562 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:24:55.765568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 03:24:55.765578 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:24:55.765586 | orchestrator | 2026-02-03 03:24:55.765595 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-03 03:24:55.765604 | orchestrator | Tuesday 03 February 2026 03:24:53 +0000 (0:00:02.426) 0:00:19.907 ****** 2026-02-03 03:24:55.765623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 03:24:58.517079 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:24:58.517180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 03:24:58.517195 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:24:58.517218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 03:24:58.517248 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:24:58.517255 | orchestrator | 2026-02-03 03:24:58.517264 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-02-03 03:24:58.517272 | orchestrator | Tuesday 03 February 2026 03:24:55 +0000 (0:00:02.324) 0:00:22.231 ****** 2026-02-03 03:24:58.517296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-03 03:24:58.517305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-03 03:24:58.517323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-03 03:27:14.658632 | orchestrator |
2026-02-03 03:27:14.658743 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-02-03 03:27:14.658752 | orchestrator | Tuesday 03 February 2026 03:24:58 +0000 (0:00:02.747) 0:00:24.978 ******
2026-02-03 03:27:14.658757 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:27:14.658763 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:27:14.658767 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:27:14.658771 | orchestrator |
2026-02-03 03:27:14.658776 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-02-03 03:27:14.658780 | orchestrator | Tuesday 03 February 2026 03:24:59 +0000 (0:00:00.868) 0:00:25.846 ******
2026-02-03 03:27:14.658784 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:27:14.658790 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:27:14.658794 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:27:14.658798 | orchestrator |
2026-02-03 03:27:14.658802 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-02-03 03:27:14.658806 | orchestrator | Tuesday 03 February 2026 03:24:59 +0000 (0:00:00.550) 0:00:26.396 ******
2026-02-03 03:27:14.658810 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:27:14.658813 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:27:14.658817 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:27:14.658821 | orchestrator |
2026-02-03 03:27:14.658825 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-02-03 03:27:14.658829 | orchestrator | Tuesday 03 February 2026 03:25:00 +0000 (0:00:00.327) 0:00:26.724 ******
2026-02-03 03:27:14.658834 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-02-03 03:27:14.658839 | orchestrator | ...ignoring
2026-02-03 03:27:14.658843 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-02-03 03:27:14.658847 | orchestrator | ...ignoring
2026-02-03 03:27:14.658851 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-02-03 03:27:14.658855 | orchestrator | ...ignoring
2026-02-03 03:27:14.658875 | orchestrator |
2026-02-03 03:27:14.658879 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-02-03 03:27:14.658882 | orchestrator | Tuesday 03 February 2026 03:25:11 +0000 (0:00:10.900) 0:00:37.624 ******
2026-02-03 03:27:14.658886 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:27:14.658890 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:27:14.658894 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:27:14.658897 | orchestrator |
2026-02-03 03:27:14.658901 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-02-03 03:27:14.658905 | orchestrator | Tuesday 03 February 2026 03:25:11 +0000 (0:00:00.474) 0:00:38.099 ******
2026-02-03 03:27:14.658909 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:27:14.658913 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:27:14.658916 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:27:14.658920 | orchestrator |
2026-02-03 03:27:14.658924 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-02-03 03:27:14.658928 | orchestrator | Tuesday 03 February 2026 03:25:12 +0000 (0:00:00.699) 0:00:38.799 ******
2026-02-03 03:27:14.658931 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:27:14.658935 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:27:14.658939 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:27:14.658943 | orchestrator |
2026-02-03 03:27:14.658956 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-02-03 03:27:14.658961 | orchestrator | Tuesday 03 February 2026 03:25:12 +0000 (0:00:00.420) 0:00:39.219 ******
2026-02-03 03:27:14.658965 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:27:14.658968 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:27:14.658972 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:27:14.658976 | orchestrator |
2026-02-03 03:27:14.658980 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-02-03 03:27:14.658984 | orchestrator | Tuesday 03 February 2026 03:25:13 +0000 (0:00:00.474) 0:00:39.693 ******
2026-02-03 03:27:14.658991 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:27:14.658997 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:27:14.659003 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:27:14.659009 | orchestrator |
2026-02-03 03:27:14.659015 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-02-03 03:27:14.659024 | orchestrator | Tuesday 03 February 2026 03:25:13 +0000 (0:00:00.648) 0:00:40.140 ******
2026-02-03 03:27:14.659028 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:27:14.659032 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:27:14.659036 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:27:14.659040 | orchestrator |
2026-02-03 03:27:14.659043 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-03 03:27:14.659047 | orchestrator | Tuesday 03 February 2026 03:25:14 +0000 (0:00:00.490) 0:00:40.788 ******
2026-02-03 03:27:14.659051 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:27:14.659055 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:27:14.659059 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-02-03 03:27:14.659063 | orchestrator |
2026-02-03 03:27:14.659067 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-02-03 03:27:14.659071 | orchestrator | Tuesday 03 February 2026 03:25:14 +0000 (0:00:00.490) 0:00:41.278 ******
2026-02-03 03:27:14.659074 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:27:14.659078 | orchestrator |
2026-02-03 03:27:14.659082 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-02-03 03:27:14.659086 | orchestrator | Tuesday 03 February 2026 03:25:25 +0000 (0:00:10.515) 0:00:51.794 ******
2026-02-03 03:27:14.659089 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:27:14.659093 | orchestrator |
2026-02-03 03:27:14.659097 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-03 03:27:14.659101 | orchestrator | Tuesday 03 February 2026 03:25:25 +0000 (0:00:00.122) 0:00:51.916 ******
2026-02-03 03:27:14.659105 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:27:14.659124 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:27:14.659128 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:27:14.659132 | orchestrator |
2026-02-03 03:27:14.659136 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-02-03 03:27:14.659140 | orchestrator | Tuesday 03 February 2026 03:25:26 +0000 (0:00:01.005) 0:00:52.922 ******
2026-02-03 03:27:14.659143 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:27:14.659147 | orchestrator |
2026-02-03 03:27:14.659151 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-02-03 03:27:14.659155 | orchestrator | Tuesday 03 February 2026 03:25:34 +0000 (0:00:07.833) 0:01:00.756 ******
2026-02-03 03:27:14.659159 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:27:14.659162 | orchestrator |
2026-02-03 03:27:14.659166 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-02-03 03:27:14.659170 | orchestrator | Tuesday 03 February 2026 03:25:35 +0000 (0:00:01.610) 0:01:02.366 ******
2026-02-03 03:27:14.659204 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:27:14.659209 |
orchestrator | 2026-02-03 03:27:14.659213 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-02-03 03:27:14.659218 | orchestrator | Tuesday 03 February 2026 03:25:38 +0000 (0:00:02.608) 0:01:04.975 ****** 2026-02-03 03:27:14.659222 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:27:14.659227 | orchestrator | 2026-02-03 03:27:14.659232 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-02-03 03:27:14.659237 | orchestrator | Tuesday 03 February 2026 03:25:38 +0000 (0:00:00.139) 0:01:05.115 ****** 2026-02-03 03:27:14.659244 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:27:14.659250 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:27:14.659257 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:27:14.659264 | orchestrator | 2026-02-03 03:27:14.659270 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-02-03 03:27:14.659278 | orchestrator | Tuesday 03 February 2026 03:25:38 +0000 (0:00:00.325) 0:01:05.440 ****** 2026-02-03 03:27:14.659283 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:27:14.659287 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-02-03 03:27:14.659292 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:27:14.659296 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:27:14.659300 | orchestrator | 2026-02-03 03:27:14.659305 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-03 03:27:14.659311 | orchestrator | skipping: no hosts matched 2026-02-03 03:27:14.659318 | orchestrator | 2026-02-03 03:27:14.659324 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-03 03:27:14.659331 | orchestrator | 2026-02-03 03:27:14.659345 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-02-03 03:27:14.659351 | orchestrator | Tuesday 03 February 2026 03:25:39 +0000 (0:00:00.589) 0:01:06.030 ****** 2026-02-03 03:27:14.659359 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:27:14.659364 | orchestrator | 2026-02-03 03:27:14.659368 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-03 03:27:14.659373 | orchestrator | Tuesday 03 February 2026 03:25:57 +0000 (0:00:17.977) 0:01:24.007 ****** 2026-02-03 03:27:14.659378 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:27:14.659382 | orchestrator | 2026-02-03 03:27:14.659386 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-03 03:27:14.659390 | orchestrator | Tuesday 03 February 2026 03:26:14 +0000 (0:00:16.573) 0:01:40.580 ****** 2026-02-03 03:27:14.659395 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:27:14.659399 | orchestrator | 2026-02-03 03:27:14.659407 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-03 03:27:14.659411 | orchestrator | 2026-02-03 03:27:14.659420 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-03 03:27:14.659424 | orchestrator | Tuesday 03 February 2026 03:26:16 +0000 (0:00:02.490) 0:01:43.071 ****** 2026-02-03 03:27:14.659432 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:27:14.659437 | orchestrator | 2026-02-03 03:27:14.659442 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-03 03:27:14.659446 | orchestrator | Tuesday 03 February 2026 03:26:34 +0000 (0:00:18.014) 0:02:01.085 ****** 2026-02-03 03:27:14.659450 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:27:14.659455 | orchestrator | 2026-02-03 03:27:14.659459 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-03 03:27:14.659464 
| orchestrator | Tuesday 03 February 2026 03:26:51 +0000 (0:00:16.608) 0:02:17.694 ****** 2026-02-03 03:27:14.659468 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:27:14.659472 | orchestrator | 2026-02-03 03:27:14.659477 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-03 03:27:14.659481 | orchestrator | 2026-02-03 03:27:14.659486 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-03 03:27:14.659490 | orchestrator | Tuesday 03 February 2026 03:26:53 +0000 (0:00:02.548) 0:02:20.242 ****** 2026-02-03 03:27:14.659494 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:27:14.659499 | orchestrator | 2026-02-03 03:27:14.659503 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-03 03:27:14.659507 | orchestrator | Tuesday 03 February 2026 03:27:05 +0000 (0:00:11.982) 0:02:32.225 ****** 2026-02-03 03:27:14.659512 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:27:14.659516 | orchestrator | 2026-02-03 03:27:14.659521 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-03 03:27:14.659525 | orchestrator | Tuesday 03 February 2026 03:27:11 +0000 (0:00:05.602) 0:02:37.827 ****** 2026-02-03 03:27:14.659530 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:27:14.659534 | orchestrator | 2026-02-03 03:27:14.659539 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-03 03:27:14.659543 | orchestrator | 2026-02-03 03:27:14.659548 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-03 03:27:14.659552 | orchestrator | Tuesday 03 February 2026 03:27:13 +0000 (0:00:02.580) 0:02:40.408 ****** 2026-02-03 03:27:14.659557 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:27:14.659561 | orchestrator | 
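Editor's note: the "Wait for ... MariaDB service to sync WSREP" handlers above poll Galera's sync state, which MariaDB exposes via `SHOW STATUS LIKE 'wsrep_local_state_comment'`. A minimal sketch of that decision, assuming tab-separated status output; the function name is mine, not kolla-ansible's:

```python
# Hypothetical sketch: decide whether a Galera node has finished syncing,
# based on the output of `SHOW STATUS LIKE 'wsrep_local_state_comment'`.
# kolla-ansible's actual check may differ; this only illustrates the idea.

def wsrep_is_synced(status_output: str) -> bool:
    """Parse tab-separated SHOW STATUS output and look for 'Synced'."""
    for line in status_output.strip().splitlines():
        parts = line.split("\t")
        if len(parts) == 2 and parts[0] == "wsrep_local_state_comment":
            # 'Synced' means the node has caught up with the cluster;
            # states like 'Donor/Desynced' or 'Joining' mean keep waiting.
            return parts[1] == "Synced"
    return False
```

A wait loop like the handlers above would simply retry until this returns True or a timeout expires.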
2026-02-03 03:27:14.659565 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-02-03 03:27:14.659572 | orchestrator | Tuesday 03 February 2026 03:27:14 +0000 (0:00:00.711) 0:02:41.120 ****** 2026-02-03 03:27:27.322402 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:27:27.322520 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:27:27.322540 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:27:27.322556 | orchestrator | 2026-02-03 03:27:27.322573 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-02-03 03:27:27.322589 | orchestrator | Tuesday 03 February 2026 03:27:16 +0000 (0:00:02.332) 0:02:43.452 ****** 2026-02-03 03:27:27.322604 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:27:27.322617 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:27:27.322629 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:27:27.322642 | orchestrator | 2026-02-03 03:27:27.322657 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-02-03 03:27:27.322670 | orchestrator | Tuesday 03 February 2026 03:27:19 +0000 (0:00:02.130) 0:02:45.583 ****** 2026-02-03 03:27:27.322683 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:27:27.322696 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:27:27.322709 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:27:27.322722 | orchestrator | 2026-02-03 03:27:27.322736 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-02-03 03:27:27.322750 | orchestrator | Tuesday 03 February 2026 03:27:21 +0000 (0:00:02.402) 0:02:47.985 ****** 2026-02-03 03:27:27.322765 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:27:27.322780 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:27:27.322795 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:27:27.322829 | orchestrator | 
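Editor's note: the repeated "Check/Wait for MariaDB service port liveness" tasks (including the timeout "waiting for search string MariaDB in 192.168.16.1x:3306" earlier in this play) behave like Ansible's `wait_for` with a `search_regex`: the MySQL-protocol greeting that MariaDB sends on connect contains a version string such as `10.x.y-MariaDB`, so reading the first packet distinguishes a live MariaDB from a merely open port. A hedged sketch, with the parsing split out so it can be exercised without a server; the helper names are mine:

```python
import socket

def banner_mentions_mariadb(banner: bytes) -> bool:
    """True if the server's initial greeting contains 'MariaDB'."""
    return b"MariaDB" in banner

def check_port_liveness(host: str, port: int = 3306, timeout: float = 10.0) -> bool:
    # Hypothetical helper mirroring wait_for's search_regex behaviour:
    # connect, read the initial handshake packet, look for the banner.
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            return banner_mentions_mariadb(sock.recv(4096))
    except OSError:
        return False
```

During bootstrap the port is simply not serving yet, which is why those checks fail with a timeout and are `...ignoring`-ed: the result only feeds the "Divide hosts by their MariaDB service port liveness" grouping.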
2026-02-03 03:27:27.322892 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-03 03:27:27.322991 | orchestrator | Tuesday 03 February 2026 03:27:23 +0000 (0:00:02.094) 0:02:50.080 ****** 2026-02-03 03:27:27.323005 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:27:27.323016 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:27:27.323026 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:27:27.323037 | orchestrator | 2026-02-03 03:27:27.323047 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-03 03:27:27.323059 | orchestrator | Tuesday 03 February 2026 03:27:26 +0000 (0:00:02.948) 0:02:53.029 ****** 2026-02-03 03:27:27.323068 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:27:27.323078 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:27:27.323088 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:27:27.323096 | orchestrator | 2026-02-03 03:27:27.323105 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 03:27:27.323115 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-02-03 03:27:27.323126 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-02-03 03:27:27.323164 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-02-03 03:27:27.323174 | orchestrator | 2026-02-03 03:27:27.323183 | orchestrator | 2026-02-03 03:27:27.323192 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 03:27:27.323200 | orchestrator | Tuesday 03 February 2026 03:27:26 +0000 (0:00:00.234) 0:02:53.264 ****** 2026-02-03 03:27:27.323209 | orchestrator | =============================================================================== 2026-02-03 03:27:27.323233 | 
orchestrator | mariadb : Restart MariaDB container ------------------------------------ 35.99s 2026-02-03 03:27:27.323243 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 33.18s 2026-02-03 03:27:27.323251 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.98s 2026-02-03 03:27:27.323260 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.90s 2026-02-03 03:27:27.323268 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.52s 2026-02-03 03:27:27.323276 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.83s 2026-02-03 03:27:27.323286 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.60s 2026-02-03 03:27:27.323294 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.04s 2026-02-03 03:27:27.323303 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.77s 2026-02-03 03:27:27.323312 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.06s 2026-02-03 03:27:27.323320 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.95s 2026-02-03 03:27:27.323329 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.75s 2026-02-03 03:27:27.323337 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.61s 2026-02-03 03:27:27.323346 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.58s 2026-02-03 03:27:27.323354 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.55s 2026-02-03 03:27:27.323363 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.43s 2026-02-03 03:27:27.323372 | 
orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.40s 2026-02-03 03:27:27.323381 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.33s 2026-02-03 03:27:27.323390 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.32s 2026-02-03 03:27:27.323399 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.27s 2026-02-03 03:27:29.866933 | orchestrator | 2026-02-03 03:27:29 | INFO  | Task b5a2696e-28e4-4185-bde8-0320dc023169 (rabbitmq) was prepared for execution. 2026-02-03 03:27:29.867004 | orchestrator | 2026-02-03 03:27:29 | INFO  | It takes a moment until task b5a2696e-28e4-4185-bde8-0320dc023169 (rabbitmq) has been started and output is visible here. 2026-02-03 03:27:43.658923 | orchestrator | 2026-02-03 03:27:43.659003 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-03 03:27:43.659009 | orchestrator | 2026-02-03 03:27:43.659014 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-03 03:27:43.659019 | orchestrator | Tuesday 03 February 2026 03:27:34 +0000 (0:00:00.210) 0:00:00.210 ****** 2026-02-03 03:27:43.659023 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:27:43.659029 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:27:43.659033 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:27:43.659037 | orchestrator | 2026-02-03 03:27:43.659041 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-03 03:27:43.659045 | orchestrator | Tuesday 03 February 2026 03:27:34 +0000 (0:00:00.307) 0:00:00.517 ****** 2026-02-03 03:27:43.659049 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-02-03 03:27:43.659054 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-02-03 03:27:43.659058 | orchestrator | ok: 
[testbed-node-2] => (item=enable_rabbitmq_True) 2026-02-03 03:27:43.659062 | orchestrator | 2026-02-03 03:27:43.659065 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-02-03 03:27:43.659070 | orchestrator | 2026-02-03 03:27:43.659113 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-03 03:27:43.659119 | orchestrator | Tuesday 03 February 2026 03:27:35 +0000 (0:00:00.593) 0:00:01.110 ****** 2026-02-03 03:27:43.659126 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:27:43.659134 | orchestrator | 2026-02-03 03:27:43.659140 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-03 03:27:43.659146 | orchestrator | Tuesday 03 February 2026 03:27:35 +0000 (0:00:00.558) 0:00:01.669 ****** 2026-02-03 03:27:43.659152 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:27:43.659158 | orchestrator | 2026-02-03 03:27:43.659164 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-02-03 03:27:43.659170 | orchestrator | Tuesday 03 February 2026 03:27:36 +0000 (0:00:01.043) 0:00:02.713 ****** 2026-02-03 03:27:43.659176 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:27:43.659184 | orchestrator | 2026-02-03 03:27:43.659189 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-02-03 03:27:43.659195 | orchestrator | Tuesday 03 February 2026 03:27:37 +0000 (0:00:00.382) 0:00:03.095 ****** 2026-02-03 03:27:43.659201 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:27:43.659208 | orchestrator | 2026-02-03 03:27:43.659214 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-02-03 03:27:43.659219 | orchestrator | Tuesday 03 February 2026 03:27:37 +0000 (0:00:00.381) 0:00:03.476 ****** 
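Editor's note: the tasks "Check if running RabbitMQ is at most one version behind" and "Catch when RabbitMQ is being downgraded" gate the upgrade path before any container is touched. A sketch of what such a comparison amounts to; this is my reading of the task names, not kolla-ansible's actual code:

```python
def upgrade_allowed(running: str, new: str) -> bool:
    # Hypothetical version gate: refuse downgrades, and within the same
    # major series refuse skipping more than one minor version.
    run = tuple(int(p) for p in running.split("."))
    tgt = tuple(int(p) for p in new.split("."))
    if tgt < run:
        # "Catch when RabbitMQ is being downgraded"
        return False
    if tgt[0] == run[0] and tgt[1] - run[1] > 1:
        # "Check if running RabbitMQ is at most one version behind"
        return False
    return True
```

In this run both checks are skipped, since there is no running container to compare against yet on the node where the facts were gathered.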
2026-02-03 03:27:43.659225 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:27:43.659232 | orchestrator | 2026-02-03 03:27:43.659238 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-02-03 03:27:43.659245 | orchestrator | Tuesday 03 February 2026 03:27:37 +0000 (0:00:00.376) 0:00:03.853 ****** 2026-02-03 03:27:43.659251 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:27:43.659257 | orchestrator | 2026-02-03 03:27:43.659263 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-03 03:27:43.659269 | orchestrator | Tuesday 03 February 2026 03:27:38 +0000 (0:00:00.619) 0:00:04.472 ****** 2026-02-03 03:27:43.659291 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:27:43.659318 | orchestrator | 2026-02-03 03:27:43.659325 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-03 03:27:43.659332 | orchestrator | Tuesday 03 February 2026 03:27:39 +0000 (0:00:00.931) 0:00:05.404 ****** 2026-02-03 03:27:43.659338 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:27:43.659345 | orchestrator | 2026-02-03 03:27:43.659351 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-02-03 03:27:43.659358 | orchestrator | Tuesday 03 February 2026 03:27:40 +0000 (0:00:00.850) 0:00:06.254 ****** 2026-02-03 03:27:43.659365 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:27:43.659371 | orchestrator | 2026-02-03 03:27:43.659377 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-02-03 03:27:43.659383 | orchestrator | Tuesday 03 February 2026 03:27:40 +0000 (0:00:00.383) 0:00:06.638 ****** 2026-02-03 03:27:43.659389 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:27:43.659395 | orchestrator | 2026-02-03 
03:27:43.659401 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-02-03 03:27:43.659408 | orchestrator | Tuesday 03 February 2026 03:27:41 +0000 (0:00:00.381) 0:00:07.020 ****** 2026-02-03 03:27:43.659433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-03 03:27:43.659439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-03 03:27:43.659444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-03 03:27:43.659453 | orchestrator | 2026-02-03 03:27:43.659462 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-02-03 03:27:43.659466 | orchestrator | Tuesday 03 February 2026 03:27:41 +0000 (0:00:00.811) 0:00:07.832 ****** 2026-02-03 03:27:43.659471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-03 03:27:43.659481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-03 03:28:04.228169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-03 03:28:04.228273 | orchestrator | 2026-02-03 03:28:04.228284 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-02-03 03:28:04.228292 | orchestrator | Tuesday 03 February 2026 03:27:43 +0000 (0:00:01.674) 0:00:09.506 ****** 2026-02-03 03:28:04.228320 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-03 03:28:04.228328 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-03 03:28:04.228334 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-03 03:28:04.228340 | orchestrator | 2026-02-03 03:28:04.228348 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] 
*********************************** 2026-02-03 03:28:04.228354 | orchestrator | Tuesday 03 February 2026 03:27:45 +0000 (0:00:01.481) 0:00:10.987 ****** 2026-02-03 03:28:04.228373 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-03 03:28:04.228380 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-03 03:28:04.228386 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-03 03:28:04.228391 | orchestrator | 2026-02-03 03:28:04.228398 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-02-03 03:28:04.228404 | orchestrator | Tuesday 03 February 2026 03:27:46 +0000 (0:00:01.676) 0:00:12.664 ****** 2026-02-03 03:28:04.228410 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-03 03:28:04.228416 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-03 03:28:04.228422 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-03 03:28:04.228427 | orchestrator | 2026-02-03 03:28:04.228433 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-02-03 03:28:04.228439 | orchestrator | Tuesday 03 February 2026 03:27:48 +0000 (0:00:01.324) 0:00:13.989 ****** 2026-02-03 03:28:04.228445 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-03 03:28:04.228452 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-03 03:28:04.228457 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-03 03:28:04.228463 | orchestrator | 2026-02-03 03:28:04.228469 | orchestrator | TASK 
[rabbitmq : Copying over definitions.json] ******************************** 2026-02-03 03:28:04.228475 | orchestrator | Tuesday 03 February 2026 03:27:49 +0000 (0:00:01.752) 0:00:15.741 ****** 2026-02-03 03:28:04.228480 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-03 03:28:04.228486 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-03 03:28:04.228492 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-03 03:28:04.228498 | orchestrator | 2026-02-03 03:28:04.228504 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-02-03 03:28:04.228511 | orchestrator | Tuesday 03 February 2026 03:27:51 +0000 (0:00:01.507) 0:00:17.248 ****** 2026-02-03 03:28:04.228517 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-03 03:28:04.228523 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-03 03:28:04.228529 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-03 03:28:04.228534 | orchestrator | 2026-02-03 03:28:04.228540 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-03 03:28:04.228545 | orchestrator | Tuesday 03 February 2026 03:27:52 +0000 (0:00:01.403) 0:00:18.652 ****** 2026-02-03 03:28:04.228551 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:28:04.228558 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:28:04.228582 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:28:04.228596 | orchestrator | 2026-02-03 03:28:04.228602 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-02-03 03:28:04.228608 | orchestrator | Tuesday 
03 February 2026 03:27:53 +0000 (0:00:00.420) 0:00:19.072 ****** 2026-02-03 03:28:04.228615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-03 03:28:04.228627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-03 03:28:04.228633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-03 03:28:04.228640 | orchestrator | 2026-02-03 03:28:04.228645 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-02-03 03:28:04.228651 | orchestrator | Tuesday 03 February 2026 03:27:54 +0000 (0:00:01.335) 0:00:20.408 ****** 2026-02-03 03:28:04.228658 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:28:04.228663 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:28:04.228670 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:28:04.228676 | orchestrator | 2026-02-03 03:28:04.228682 | orchestrator | TASK 
[rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-02-03 03:28:04.228694 | orchestrator | Tuesday 03 February 2026 03:27:55 +0000 (0:00:00.986) 0:00:21.395 ****** 2026-02-03 03:28:04.228699 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:28:04.228706 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:28:04.228712 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:28:04.228717 | orchestrator | 2026-02-03 03:28:04.228724 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-03 03:28:04.228734 | orchestrator | Tuesday 03 February 2026 03:28:04 +0000 (0:00:08.678) 0:00:30.073 ****** 2026-02-03 03:29:42.595911 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:29:42.596011 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:29:42.596022 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:29:42.596029 | orchestrator | 2026-02-03 03:29:42.596039 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-03 03:29:42.596047 | orchestrator | 2026-02-03 03:29:42.596055 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-03 03:29:42.596062 | orchestrator | Tuesday 03 February 2026 03:28:04 +0000 (0:00:00.595) 0:00:30.668 ****** 2026-02-03 03:29:42.596069 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:29:42.596077 | orchestrator | 2026-02-03 03:29:42.596083 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-03 03:29:42.596090 | orchestrator | Tuesday 03 February 2026 03:28:05 +0000 (0:00:00.627) 0:00:31.296 ****** 2026-02-03 03:29:42.596096 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:29:42.596103 | orchestrator | 2026-02-03 03:29:42.596109 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-03 03:29:42.596116 | orchestrator | Tuesday 03 
February 2026 03:28:05 +0000 (0:00:00.247) 0:00:31.543 ****** 2026-02-03 03:29:42.596122 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:29:42.596129 | orchestrator | 2026-02-03 03:29:42.596136 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-03 03:29:42.596142 | orchestrator | Tuesday 03 February 2026 03:28:07 +0000 (0:00:01.649) 0:00:33.193 ****** 2026-02-03 03:29:42.596148 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:29:42.596155 | orchestrator | 2026-02-03 03:29:42.596160 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-03 03:29:42.596167 | orchestrator | 2026-02-03 03:29:42.596172 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-03 03:29:42.596179 | orchestrator | Tuesday 03 February 2026 03:29:02 +0000 (0:00:55.324) 0:01:28.517 ****** 2026-02-03 03:29:42.596185 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:29:42.596191 | orchestrator | 2026-02-03 03:29:42.596198 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-03 03:29:42.596204 | orchestrator | Tuesday 03 February 2026 03:29:03 +0000 (0:00:00.633) 0:01:29.150 ****** 2026-02-03 03:29:42.596210 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:29:42.596216 | orchestrator | 2026-02-03 03:29:42.596223 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-03 03:29:42.596229 | orchestrator | Tuesday 03 February 2026 03:29:03 +0000 (0:00:00.237) 0:01:29.388 ****** 2026-02-03 03:29:42.596236 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:29:42.596243 | orchestrator | 2026-02-03 03:29:42.596250 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-03 03:29:42.596271 | orchestrator | Tuesday 03 February 2026 03:29:05 +0000 (0:00:01.604) 0:01:30.992 
****** 2026-02-03 03:29:42.596278 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:29:42.596284 | orchestrator | 2026-02-03 03:29:42.596290 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-03 03:29:42.596297 | orchestrator | 2026-02-03 03:29:42.596303 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-03 03:29:42.596309 | orchestrator | Tuesday 03 February 2026 03:29:20 +0000 (0:00:15.673) 0:01:46.665 ****** 2026-02-03 03:29:42.596315 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:29:42.596322 | orchestrator | 2026-02-03 03:29:42.596349 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-03 03:29:42.596356 | orchestrator | Tuesday 03 February 2026 03:29:21 +0000 (0:00:00.836) 0:01:47.501 ****** 2026-02-03 03:29:42.596362 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:29:42.596368 | orchestrator | 2026-02-03 03:29:42.596374 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-03 03:29:42.596381 | orchestrator | Tuesday 03 February 2026 03:29:21 +0000 (0:00:00.251) 0:01:47.753 ****** 2026-02-03 03:29:42.596387 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:29:42.596394 | orchestrator | 2026-02-03 03:29:42.596401 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-03 03:29:42.596407 | orchestrator | Tuesday 03 February 2026 03:29:23 +0000 (0:00:01.703) 0:01:49.457 ****** 2026-02-03 03:29:42.596414 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:29:42.596420 | orchestrator | 2026-02-03 03:29:42.596426 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-02-03 03:29:42.596432 | orchestrator | 2026-02-03 03:29:42.596439 | orchestrator | TASK [Include rabbitmq post-deploy.yml] 
**************************************** 2026-02-03 03:29:42.596445 | orchestrator | Tuesday 03 February 2026 03:29:39 +0000 (0:00:15.725) 0:02:05.183 ****** 2026-02-03 03:29:42.596451 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:29:42.596457 | orchestrator | 2026-02-03 03:29:42.596463 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-03 03:29:42.596470 | orchestrator | Tuesday 03 February 2026 03:29:39 +0000 (0:00:00.521) 0:02:05.704 ****** 2026-02-03 03:29:42.596476 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-03 03:29:42.596483 | orchestrator | enable_outward_rabbitmq_True 2026-02-03 03:29:42.596489 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-03 03:29:42.596495 | orchestrator | outward_rabbitmq_restart 2026-02-03 03:29:42.596502 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:29:42.596509 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:29:42.596515 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:29:42.596521 | orchestrator | 2026-02-03 03:29:42.596528 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-02-03 03:29:42.596535 | orchestrator | skipping: no hosts matched 2026-02-03 03:29:42.596541 | orchestrator | 2026-02-03 03:29:42.596547 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-02-03 03:29:42.596555 | orchestrator | skipping: no hosts matched 2026-02-03 03:29:42.596560 | orchestrator | 2026-02-03 03:29:42.596565 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-02-03 03:29:42.596569 | orchestrator | skipping: no hosts matched 2026-02-03 03:29:42.596574 | orchestrator | 2026-02-03 03:29:42.596578 | orchestrator | PLAY RECAP ********************************************************************* 
2026-02-03 03:29:42.596595 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-03 03:29:42.596602 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 03:29:42.596606 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 03:29:42.596611 | orchestrator | 2026-02-03 03:29:42.596615 | orchestrator | 2026-02-03 03:29:42.596620 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 03:29:42.596624 | orchestrator | Tuesday 03 February 2026 03:29:42 +0000 (0:00:02.385) 0:02:08.089 ****** 2026-02-03 03:29:42.596629 | orchestrator | =============================================================================== 2026-02-03 03:29:42.596633 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 86.72s 2026-02-03 03:29:42.596637 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.68s 2026-02-03 03:29:42.596649 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 4.96s 2026-02-03 03:29:42.596654 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.39s 2026-02-03 03:29:42.596658 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.10s 2026-02-03 03:29:42.596663 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.75s 2026-02-03 03:29:42.596667 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.68s 2026-02-03 03:29:42.596672 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.67s 2026-02-03 03:29:42.596676 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.51s 2026-02-03 03:29:42.596681 
| orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.48s 2026-02-03 03:29:42.596684 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.40s 2026-02-03 03:29:42.596688 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.34s 2026-02-03 03:29:42.596692 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.32s 2026-02-03 03:29:42.596696 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.04s 2026-02-03 03:29:42.596703 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.99s 2026-02-03 03:29:42.596707 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.93s 2026-02-03 03:29:42.596745 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.85s 2026-02-03 03:29:42.596750 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.81s 2026-02-03 03:29:42.596754 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.74s 2026-02-03 03:29:42.596758 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 0.62s 2026-02-03 03:29:45.070881 | orchestrator | 2026-02-03 03:29:45 | INFO  | Task 4c1a69ae-95ab-440b-9c09-2e91195551c9 (openvswitch) was prepared for execution. 2026-02-03 03:29:45.070952 | orchestrator | 2026-02-03 03:29:45 | INFO  | It takes a moment until task 4c1a69ae-95ab-440b-9c09-2e91195551c9 (openvswitch) has been started and output is visible here. 
2026-02-03 03:29:58.070354 | orchestrator | 2026-02-03 03:29:58.070442 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-03 03:29:58.070453 | orchestrator | 2026-02-03 03:29:58.070459 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-03 03:29:58.070466 | orchestrator | Tuesday 03 February 2026 03:29:49 +0000 (0:00:00.277) 0:00:00.277 ****** 2026-02-03 03:29:58.070472 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:29:58.070479 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:29:58.070484 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:29:58.070490 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:29:58.070495 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:29:58.070501 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:29:58.070507 | orchestrator | 2026-02-03 03:29:58.070512 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-03 03:29:58.070518 | orchestrator | Tuesday 03 February 2026 03:29:50 +0000 (0:00:00.769) 0:00:01.047 ****** 2026-02-03 03:29:58.070524 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-03 03:29:58.070531 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-03 03:29:58.070536 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-03 03:29:58.070542 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-03 03:29:58.070551 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-03 03:29:58.070561 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-03 03:29:58.070570 | orchestrator | 2026-02-03 03:29:58.070604 | orchestrator | PLAY [Apply role openvswitch] 
************************************************** 2026-02-03 03:29:58.070614 | orchestrator | 2026-02-03 03:29:58.070624 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-02-03 03:29:58.070633 | orchestrator | Tuesday 03 February 2026 03:29:50 +0000 (0:00:00.654) 0:00:01.702 ****** 2026-02-03 03:29:58.070642 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-03 03:29:58.070652 | orchestrator | 2026-02-03 03:29:58.070661 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-03 03:29:58.070721 | orchestrator | Tuesday 03 February 2026 03:29:52 +0000 (0:00:01.180) 0:00:02.883 ****** 2026-02-03 03:29:58.070739 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-03 03:29:58.070746 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-03 03:29:58.070752 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-03 03:29:58.070757 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-03 03:29:58.070763 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-03 03:29:58.070769 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-03 03:29:58.070774 | orchestrator | 2026-02-03 03:29:58.070780 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-03 03:29:58.070786 | orchestrator | Tuesday 03 February 2026 03:29:53 +0000 (0:00:01.173) 0:00:04.056 ****** 2026-02-03 03:29:58.070792 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-03 03:29:58.070797 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-03 03:29:58.070803 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-03 03:29:58.070808 | orchestrator | changed: 
[testbed-node-3] => (item=openvswitch) 2026-02-03 03:29:58.070814 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-03 03:29:58.070819 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-03 03:29:58.070825 | orchestrator | 2026-02-03 03:29:58.070831 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-03 03:29:58.070836 | orchestrator | Tuesday 03 February 2026 03:29:54 +0000 (0:00:01.526) 0:00:05.583 ****** 2026-02-03 03:29:58.070842 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-02-03 03:29:58.070848 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:29:58.070855 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-02-03 03:29:58.070860 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:29:58.070866 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-02-03 03:29:58.070871 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:29:58.070877 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-02-03 03:29:58.070883 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:29:58.070888 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-02-03 03:29:58.070894 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:29:58.070899 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-02-03 03:29:58.070905 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:29:58.070910 | orchestrator | 2026-02-03 03:29:58.070917 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-02-03 03:29:58.070922 | orchestrator | Tuesday 03 February 2026 03:29:55 +0000 (0:00:01.187) 0:00:06.771 ****** 2026-02-03 03:29:58.070928 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:29:58.070934 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:29:58.070939 | orchestrator | skipping: [testbed-node-2] 
2026-02-03 03:29:58.070945 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:29:58.070951 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:29:58.070956 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:29:58.070962 | orchestrator | 2026-02-03 03:29:58.070967 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-02-03 03:29:58.070981 | orchestrator | Tuesday 03 February 2026 03:29:56 +0000 (0:00:00.779) 0:00:07.550 ****** 2026-02-03 03:29:58.071003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-03 03:29:58.071014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2026-02-03 03:29:58.071021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-03 03:29:58.071084 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-03 03:29:58.071098 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-03 03:29:58.071110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-03 03:30:00.525947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-03 03:30:00.526117 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-03 03:30:00.526150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-03 03:30:00.526174 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-03 03:30:00.526213 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-03 03:30:00.526275 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-03 03:30:00.526297 | orchestrator | 2026-02-03 03:30:00.526320 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-02-03 03:30:00.526339 | orchestrator | Tuesday 03 February 2026 03:29:58 +0000 (0:00:01.458) 0:00:09.008 ****** 2026-02-03 03:30:00.526358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-03 03:30:00.526379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-03 03:30:00.526400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-03 03:30:00.526413 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-03 03:30:00.526441 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-03 03:30:00.526474 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-03 03:30:03.348874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-03 03:30:03.348981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-03 03:30:03.348999 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-03 03:30:03.349029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-03 03:30:03.349062 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-03 03:30:03.349093 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-03 03:30:03.349105 | orchestrator | 2026-02-03 03:30:03.349117 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-02-03 03:30:03.349130 | orchestrator | Tuesday 03 February 2026 03:30:00 +0000 (0:00:02.439) 0:00:11.448 ****** 2026-02-03 03:30:03.349139 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:30:03.349152 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:30:03.349162 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:30:03.349171 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:30:03.349181 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:30:03.349192 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:30:03.349199 | orchestrator | 2026-02-03 03:30:03.349207 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-02-03 03:30:03.349213 | orchestrator | Tuesday 03 February 2026 03:30:01 +0000 (0:00:01.009) 0:00:12.457 ****** 2026-02-03 03:30:03.349220 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-03 03:30:03.349228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-03 03:30:03.349246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-03 03:30:03.349256 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-03 03:30:03.349275 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-03 03:30:28.814410 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-03 03:30:28.814526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-03 03:30:28.814541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-03 
03:30:28.814651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-03 03:30:28.814669 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-03 03:30:28.814701 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-03 03:30:28.814714 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-03 03:30:28.814726 | orchestrator | 2026-02-03 03:30:28.814740 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-03 03:30:28.814754 | orchestrator | Tuesday 03 February 2026 03:30:03 +0000 (0:00:01.876) 0:00:14.334 ****** 2026-02-03 03:30:28.814765 | orchestrator | 2026-02-03 03:30:28.814777 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-03 03:30:28.814788 | orchestrator | Tuesday 03 February 2026 03:30:03 +0000 (0:00:00.422) 0:00:14.756 ****** 2026-02-03 03:30:28.814813 | orchestrator | 2026-02-03 03:30:28.814824 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-03 03:30:28.814835 | orchestrator | Tuesday 03 February 2026 03:30:04 +0000 (0:00:00.148) 0:00:14.905 ****** 2026-02-03 03:30:28.814846 | orchestrator | 2026-02-03 03:30:28.814857 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 
2026-02-03 03:30:28.814868 | orchestrator | Tuesday 03 February 2026 03:30:04 +0000 (0:00:00.141) 0:00:15.046 ******
2026-02-03 03:30:28.814879 | orchestrator |
2026-02-03 03:30:28.814890 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-03 03:30:28.814901 | orchestrator | Tuesday 03 February 2026 03:30:04 +0000 (0:00:00.131) 0:00:15.178 ******
2026-02-03 03:30:28.814912 | orchestrator |
2026-02-03 03:30:28.814924 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-03 03:30:28.814935 | orchestrator | Tuesday 03 February 2026 03:30:04 +0000 (0:00:00.130) 0:00:15.309 ******
2026-02-03 03:30:28.814948 | orchestrator |
2026-02-03 03:30:28.814960 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-02-03 03:30:28.814973 | orchestrator | Tuesday 03 February 2026 03:30:04 +0000 (0:00:00.130) 0:00:15.439 ******
2026-02-03 03:30:28.814986 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:30:28.815001 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:30:28.815014 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:30:28.815027 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:30:28.815039 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:30:28.815052 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:30:28.815065 | orchestrator |
2026-02-03 03:30:28.815078 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-02-03 03:30:28.815092 | orchestrator | Tuesday 03 February 2026 03:30:13 +0000 (0:00:08.637) 0:00:24.076 ******
2026-02-03 03:30:28.815106 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:30:28.815124 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:30:28.815138 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:30:28.815151 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:30:28.815165 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:30:28.815177 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:30:28.815188 | orchestrator |
2026-02-03 03:30:28.815200 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-02-03 03:30:28.815211 | orchestrator | Tuesday 03 February 2026 03:30:14 +0000 (0:00:01.118) 0:00:25.195 ******
2026-02-03 03:30:28.815222 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:30:28.815233 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:30:28.815244 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:30:28.815255 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:30:28.815266 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:30:28.815277 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:30:28.815288 | orchestrator |
2026-02-03 03:30:28.815299 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-02-03 03:30:28.815311 | orchestrator | Tuesday 03 February 2026 03:30:22 +0000 (0:00:08.184) 0:00:33.379 ******
2026-02-03 03:30:28.815322 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-02-03 03:30:28.815333 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-02-03 03:30:28.815344 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-02-03 03:30:28.815355 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-02-03 03:30:28.815367 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-02-03 03:30:28.815378 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-02-03 03:30:28.815405 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-02-03 03:30:28.815443 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-02-03 03:30:42.319502 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-02-03 03:30:42.319654 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-02-03 03:30:42.319673 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-02-03 03:30:42.319686 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-02-03 03:30:42.319699 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-03 03:30:42.319713 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-03 03:30:42.319725 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-03 03:30:42.319738 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-03 03:30:42.319748 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-03 03:30:42.319755 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-03 03:30:42.319763 | orchestrator |
2026-02-03 03:30:42.319772 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-02-03 03:30:42.319781 | orchestrator | Tuesday 03 February 2026 03:30:28 +0000 (0:00:06.262) 0:00:39.641 ******
2026-02-03 03:30:42.319790 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-02-03 03:30:42.319798 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:30:42.319807 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-02-03 03:30:42.319814 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:30:42.319821 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-02-03 03:30:42.319828 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:30:42.319836 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-02-03 03:30:42.319843 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-02-03 03:30:42.319850 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-02-03 03:30:42.319858 | orchestrator |
2026-02-03 03:30:42.319865 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-02-03 03:30:42.319873 | orchestrator | Tuesday 03 February 2026 03:30:31 +0000 (0:00:02.404) 0:00:42.046 ******
2026-02-03 03:30:42.319880 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-02-03 03:30:42.319887 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:30:42.319895 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-02-03 03:30:42.319902 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:30:42.319909 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-02-03 03:30:42.319916 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:30:42.319923 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-02-03 03:30:42.319931 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-02-03 03:30:42.319953 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-02-03 03:30:42.319960 | orchestrator |
2026-02-03 03:30:42.319968 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-02-03 03:30:42.319975 | orchestrator | Tuesday 03 February 2026 03:30:34 +0000 (0:00:03.380) 0:00:45.427 ******
2026-02-03 03:30:42.319982 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:30:42.319990 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:30:42.320013 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:30:42.320021 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:30:42.320028 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:30:42.320035 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:30:42.320043 | orchestrator |
2026-02-03 03:30:42.320053 | orchestrator | PLAY RECAP *********************************************************************
2026-02-03 03:30:42.320062 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-03 03:30:42.320072 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-03 03:30:42.320081 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-03 03:30:42.320090 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-03 03:30:42.320098 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-03 03:30:42.320107 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-03 03:30:42.320116 | orchestrator |
2026-02-03 03:30:42.320125 | orchestrator |
2026-02-03 03:30:42.320134 | orchestrator | TASKS RECAP ********************************************************************
2026-02-03 03:30:42.320142 | orchestrator | Tuesday 03 February 2026 03:30:41 +0000 (0:00:07.263) 0:00:52.690 ******
2026-02-03 03:30:42.320166 | orchestrator | ===============================================================================
2026-02-03 03:30:42.320176 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 15.45s
2026-02-03 03:30:42.320185 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 8.64s
2026-02-03 03:30:42.320193 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.26s
2026-02-03 03:30:42.320202 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.38s
2026-02-03 03:30:42.320210 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.44s
2026-02-03 03:30:42.320218 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.40s
2026-02-03 03:30:42.320227 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.88s
2026-02-03 03:30:42.320235 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.53s
2026-02-03 03:30:42.320243 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.46s
2026-02-03 03:30:42.320252 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.19s
2026-02-03 03:30:42.320260 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.18s
2026-02-03 03:30:42.320269 | orchestrator | module-load : Load modules ---------------------------------------------- 1.17s
2026-02-03 03:30:42.320277 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.12s
2026-02-03 03:30:42.320286 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.11s
2026-02-03 03:30:42.320294 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.01s
2026-02-03 03:30:42.320302 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.78s
2026-02-03 03:30:42.320311 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.77s
2026-02-03 03:30:42.320320 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.65s
2026-02-03 03:30:44.863707 | orchestrator | 2026-02-03 03:30:44 | INFO  | Task eb827ba8-91ce-435f-9c64-13dd34c85bd3 (ovn) was prepared for execution.
2026-02-03 03:30:44.863796 | orchestrator | 2026-02-03 03:30:44 | INFO  | It takes a moment until task eb827ba8-91ce-435f-9c64-13dd34c85bd3 (ovn) has been started and output is visible here.
2026-02-03 03:30:55.899586 | orchestrator |
2026-02-03 03:30:55.899697 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-03 03:30:55.899712 | orchestrator |
2026-02-03 03:30:55.899721 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-03 03:30:55.899730 | orchestrator | Tuesday 03 February 2026 03:30:49 +0000 (0:00:00.166) 0:00:00.166 ******
2026-02-03 03:30:55.899737 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:30:55.899746 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:30:55.899753 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:30:55.899761 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:30:55.899769 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:30:55.899777 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:30:55.899785 | orchestrator |
2026-02-03 03:30:55.899793 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-03 03:30:55.899801 | orchestrator | Tuesday 03 February 2026 03:30:49 +0000 (0:00:00.695) 0:00:00.862 ******
2026-02-03 03:30:55.899826 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-02-03 03:30:55.899834 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-02-03
03:30:55.899841 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-02-03 03:30:55.899849 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-02-03 03:30:55.899856 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-02-03 03:30:55.899864 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-02-03 03:30:55.899871 | orchestrator | 2026-02-03 03:30:55.899880 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-02-03 03:30:55.899888 | orchestrator | 2026-02-03 03:30:55.899896 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-02-03 03:30:55.899904 | orchestrator | Tuesday 03 February 2026 03:30:50 +0000 (0:00:00.839) 0:00:01.701 ****** 2026-02-03 03:30:55.899913 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:30:55.899922 | orchestrator | 2026-02-03 03:30:55.899930 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-02-03 03:30:55.899938 | orchestrator | Tuesday 03 February 2026 03:30:51 +0000 (0:00:01.188) 0:00:02.890 ****** 2026-02-03 03:30:55.899948 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:30:55.899959 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:30:55.899967 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:30:55.899976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:30:55.900008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:30:55.900036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:30:55.900043 | orchestrator | 2026-02-03 03:30:55.900051 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-02-03 03:30:55.900058 | orchestrator | Tuesday 03 February 2026 03:30:53 +0000 (0:00:01.322) 0:00:04.212 ****** 2026-02-03 03:30:55.900071 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:30:55.900079 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:30:55.900087 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:30:55.900095 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:30:55.900104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:30:55.900112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:30:55.900125 | orchestrator | 2026-02-03 03:30:55.900134 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-02-03 03:30:55.900141 | orchestrator | Tuesday 03 February 2026 03:30:54 +0000 (0:00:01.629) 0:00:05.841 ****** 2026-02-03 03:30:55.900150 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:30:55.900159 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:30:55.900173 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:31:20.331348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:31:20.331463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:31:20.331605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:31:20.331620 | orchestrator | 2026-02-03 03:31:20.331629 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-02-03 03:31:20.331637 | orchestrator | Tuesday 03 February 2026 03:30:55 +0000 (0:00:01.167) 0:00:07.009 ****** 2026-02-03 03:31:20.331644 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:31:20.331651 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:31:20.331679 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:31:20.331686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:31:20.331692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:31:20.331715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:31:20.331722 | orchestrator | 2026-02-03 03:31:20.331729 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-02-03 03:31:20.331736 | orchestrator | Tuesday 03 February 2026 03:30:57 +0000 (0:00:01.562) 0:00:08.571 ****** 
2026-02-03 03:31:20.331749 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:31:20.331756 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:31:20.331763 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:31:20.331770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:31:20.331786 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:31:20.331796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:31:20.331806 | orchestrator | 2026-02-03 03:31:20.331816 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-02-03 03:31:20.331826 | orchestrator | Tuesday 03 February 2026 03:30:58 +0000 (0:00:01.328) 0:00:09.900 ****** 2026-02-03 03:31:20.331836 | orchestrator | changed: [testbed-node-4] 2026-02-03 03:31:20.331848 | orchestrator | changed: [testbed-node-3] 2026-02-03 03:31:20.331858 | orchestrator | changed: [testbed-node-5] 2026-02-03 03:31:20.331869 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:31:20.331879 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:31:20.331889 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:31:20.331899 | orchestrator | 2026-02-03 03:31:20.331909 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-03 03:31:20.331919 | orchestrator | Tuesday 03 February 2026 03:31:01 +0000 (0:00:02.454) 0:00:12.355 ****** 2026-02-03 03:31:20.331930 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 
2026-02-03 03:31:20.331941 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-02-03 03:31:20.331952 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-02-03 03:31:20.331962 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-03 03:31:20.331973 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-02-03 03:31:20.331983 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-03 03:31:20.332001 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-03 03:31:55.961524 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-03 03:31:55.961670 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-03 03:31:55.961719 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-03 03:31:55.961742 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-03 03:31:55.961762 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-03 03:31:55.961783 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-03 03:31:55.961804 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-03 03:31:55.961860 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-03 03:31:55.961883 | 
orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-03 03:31:55.961902 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-03 03:31:55.961922 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-03 03:31:55.961942 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-03 03:31:55.961964 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-03 03:31:55.961984 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-03 03:31:55.962004 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-03 03:31:55.962097 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-03 03:31:55.962121 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-03 03:31:55.962142 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-03 03:31:55.962160 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-03 03:31:55.962180 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-03 03:31:55.962199 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-03 03:31:55.962219 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 
'value': '60'}) 2026-02-03 03:31:55.962237 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-03 03:31:55.962255 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-03 03:31:55.962273 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-03 03:31:55.962291 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-03 03:31:55.962308 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-03 03:31:55.962328 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-03 03:31:55.962346 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-03 03:31:55.962364 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-03 03:31:55.962383 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-03 03:31:55.962429 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-03 03:31:55.962448 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-03 03:31:55.962490 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-03 03:31:55.962512 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-03 03:31:55.962531 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 
'present'}) 2026-02-03 03:31:55.962593 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-02-03 03:31:55.962614 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-02-03 03:31:55.962642 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-03 03:31:55.962661 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-03 03:31:55.962680 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-03 03:31:55.962698 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-03 03:31:55.962717 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-03 03:31:55.962735 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-03 03:31:55.962755 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-03 03:31:55.962774 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-03 03:31:55.962792 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-03 03:31:55.962811 | orchestrator | 2026-02-03 03:31:55.962831 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-02-03 03:31:55.962850 | orchestrator | Tuesday 03 February 2026 03:31:19 +0000 (0:00:18.480) 0:00:30.835 ****** 2026-02-03 03:31:55.962869 | orchestrator | 2026-02-03 03:31:55.962890 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-03 03:31:55.962909 | orchestrator | Tuesday 03 February 2026 03:31:19 +0000 (0:00:00.256) 0:00:31.092 ****** 2026-02-03 03:31:55.962927 | orchestrator | 2026-02-03 03:31:55.962946 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-03 03:31:55.962965 | orchestrator | Tuesday 03 February 2026 03:31:20 +0000 (0:00:00.066) 0:00:31.158 ****** 2026-02-03 03:31:55.962984 | orchestrator | 2026-02-03 03:31:55.963003 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-03 03:31:55.963021 | orchestrator | Tuesday 03 February 2026 03:31:20 +0000 (0:00:00.066) 0:00:31.225 ****** 2026-02-03 03:31:55.963039 | orchestrator | 2026-02-03 03:31:55.963058 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-03 03:31:55.963078 | orchestrator | Tuesday 03 February 2026 03:31:20 +0000 (0:00:00.079) 0:00:31.304 ****** 2026-02-03 03:31:55.963096 | orchestrator | 2026-02-03 03:31:55.963115 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-03 03:31:55.963133 | orchestrator | Tuesday 03 February 2026 03:31:20 +0000 (0:00:00.064) 0:00:31.368 ****** 2026-02-03 03:31:55.963151 | orchestrator | 2026-02-03 03:31:55.963170 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-02-03 03:31:55.963190 | orchestrator | Tuesday 03 February 2026 03:31:20 +0000 (0:00:00.063) 0:00:31.432 ****** 2026-02-03 03:31:55.963209 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:31:55.963228 | orchestrator | ok: 
[testbed-node-3] 2026-02-03 03:31:55.963245 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:31:55.963264 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:31:55.963283 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:31:55.963302 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:31:55.963320 | orchestrator | 2026-02-03 03:31:55.963338 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-02-03 03:31:55.963357 | orchestrator | Tuesday 03 February 2026 03:31:21 +0000 (0:00:01.607) 0:00:33.040 ****** 2026-02-03 03:31:55.963388 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:31:55.963460 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:31:55.963479 | orchestrator | changed: [testbed-node-5] 2026-02-03 03:31:55.963497 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:31:55.963515 | orchestrator | changed: [testbed-node-4] 2026-02-03 03:31:55.963535 | orchestrator | changed: [testbed-node-3] 2026-02-03 03:31:55.963552 | orchestrator | 2026-02-03 03:31:55.963571 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-02-03 03:31:55.963589 | orchestrator | 2026-02-03 03:31:55.963608 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-03 03:31:55.963625 | orchestrator | Tuesday 03 February 2026 03:31:53 +0000 (0:00:31.694) 0:01:04.734 ****** 2026-02-03 03:31:55.963644 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:31:55.963664 | orchestrator | 2026-02-03 03:31:55.963682 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-03 03:31:55.963700 | orchestrator | Tuesday 03 February 2026 03:31:54 +0000 (0:00:00.786) 0:01:05.521 ****** 2026-02-03 03:31:55.963711 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-03 03:31:55.963722 | orchestrator | 2026-02-03 03:31:55.963733 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-02-03 03:31:55.963749 | orchestrator | Tuesday 03 February 2026 03:31:54 +0000 (0:00:00.580) 0:01:06.101 ****** 2026-02-03 03:31:55.963768 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:31:55.963786 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:31:55.963804 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:31:55.963823 | orchestrator | 2026-02-03 03:31:55.963841 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-02-03 03:31:55.963873 | orchestrator | Tuesday 03 February 2026 03:31:55 +0000 (0:00:00.963) 0:01:07.064 ****** 2026-02-03 03:32:07.525075 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:32:07.525192 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:32:07.525208 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:32:07.525220 | orchestrator | 2026-02-03 03:32:07.525233 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-02-03 03:32:07.525263 | orchestrator | Tuesday 03 February 2026 03:31:56 +0000 (0:00:00.355) 0:01:07.420 ****** 2026-02-03 03:32:07.525274 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:32:07.525286 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:32:07.525297 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:32:07.525308 | orchestrator | 2026-02-03 03:32:07.525319 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-02-03 03:32:07.525331 | orchestrator | Tuesday 03 February 2026 03:31:56 +0000 (0:00:00.332) 0:01:07.753 ****** 2026-02-03 03:32:07.525342 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:32:07.525353 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:32:07.525364 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:32:07.525446 | orchestrator | 
2026-02-03 03:32:07.525460 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-02-03 03:32:07.525471 | orchestrator | Tuesday 03 February 2026 03:31:56 +0000 (0:00:00.368) 0:01:08.121 ******
2026-02-03 03:32:07.525482 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:32:07.525493 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:32:07.525504 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:32:07.525515 | orchestrator |
2026-02-03 03:32:07.525526 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-02-03 03:32:07.525538 | orchestrator | Tuesday 03 February 2026 03:31:57 +0000 (0:00:00.544) 0:01:08.666 ******
2026-02-03 03:32:07.525549 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:32:07.525561 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:32:07.525572 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:32:07.525583 | orchestrator |
2026-02-03 03:32:07.525595 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-02-03 03:32:07.525629 | orchestrator | Tuesday 03 February 2026 03:31:57 +0000 (0:00:00.335) 0:01:09.001 ******
2026-02-03 03:32:07.525644 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:32:07.525657 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:32:07.525670 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:32:07.525683 | orchestrator |
2026-02-03 03:32:07.525697 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-02-03 03:32:07.525716 | orchestrator | Tuesday 03 February 2026 03:31:58 +0000 (0:00:00.303) 0:01:09.305 ******
2026-02-03 03:32:07.525735 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:32:07.525755 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:32:07.525783 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:32:07.525806 | orchestrator |
2026-02-03 03:32:07.525824 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-02-03 03:32:07.525843 | orchestrator | Tuesday 03 February 2026 03:31:58 +0000 (0:00:00.296) 0:01:09.601 ******
2026-02-03 03:32:07.525862 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:32:07.525880 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:32:07.525899 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:32:07.525917 | orchestrator |
2026-02-03 03:32:07.525936 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-02-03 03:32:07.525956 | orchestrator | Tuesday 03 February 2026 03:31:58 +0000 (0:00:00.290) 0:01:09.892 ******
2026-02-03 03:32:07.525975 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:32:07.525994 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:32:07.526014 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:32:07.526109 | orchestrator |
2026-02-03 03:32:07.526129 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-02-03 03:32:07.526147 | orchestrator | Tuesday 03 February 2026 03:31:59 +0000 (0:00:00.525) 0:01:10.417 ******
2026-02-03 03:32:07.526166 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:32:07.526184 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:32:07.526202 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:32:07.526220 | orchestrator |
2026-02-03 03:32:07.526240 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-02-03 03:32:07.526254 | orchestrator | Tuesday 03 February 2026 03:31:59 +0000 (0:00:00.319) 0:01:10.737 ******
2026-02-03 03:32:07.526265 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:32:07.526276 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:32:07.526287 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:32:07.526298 | orchestrator |
2026-02-03 03:32:07.526309 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-02-03 03:32:07.526320 | orchestrator | Tuesday 03 February 2026 03:31:59 +0000 (0:00:00.329) 0:01:11.066 ******
2026-02-03 03:32:07.526331 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:32:07.526342 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:32:07.526352 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:32:07.526363 | orchestrator |
2026-02-03 03:32:07.526403 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-02-03 03:32:07.526418 | orchestrator | Tuesday 03 February 2026 03:32:00 +0000 (0:00:00.358) 0:01:11.425 ******
2026-02-03 03:32:07.526429 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:32:07.526440 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:32:07.526451 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:32:07.526462 | orchestrator |
2026-02-03 03:32:07.526473 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-02-03 03:32:07.526484 | orchestrator | Tuesday 03 February 2026 03:32:00 +0000 (0:00:00.541) 0:01:11.967 ******
2026-02-03 03:32:07.526495 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:32:07.526506 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:32:07.526517 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:32:07.526528 | orchestrator |
2026-02-03 03:32:07.526539 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-02-03 03:32:07.526563 | orchestrator | Tuesday 03 February 2026 03:32:01 +0000 (0:00:00.296) 0:01:12.263 ******
2026-02-03 03:32:07.526574 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:32:07.526585 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:32:07.526595 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:32:07.526606 | orchestrator |
2026-02-03 03:32:07.526617 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-02-03 03:32:07.526628 | orchestrator | Tuesday 03 February 2026 03:32:01 +0000 (0:00:00.297) 0:01:12.561 ******
2026-02-03 03:32:07.526661 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:32:07.526673 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:32:07.526684 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:32:07.526695 | orchestrator |
2026-02-03 03:32:07.526706 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-03 03:32:07.526725 | orchestrator | Tuesday 03 February 2026 03:32:01 +0000 (0:00:00.315) 0:01:12.876 ******
2026-02-03 03:32:07.526737 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 03:32:07.526748 | orchestrator |
2026-02-03 03:32:07.526760 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-02-03 03:32:07.526770 | orchestrator | Tuesday 03 February 2026 03:32:02 +0000 (0:00:00.797) 0:01:13.673 ******
2026-02-03 03:32:07.526781 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:32:07.526792 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:32:07.526803 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:32:07.526814 | orchestrator |
2026-02-03 03:32:07.526824 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-02-03 03:32:07.526835 | orchestrator | Tuesday 03 February 2026 03:32:03 +0000 (0:00:00.458) 0:01:14.132 ******
2026-02-03 03:32:07.526846 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:32:07.526857 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:32:07.526868 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:32:07.526879 | orchestrator |
2026-02-03 03:32:07.526889 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-02-03 03:32:07.526900 | orchestrator | Tuesday 03 February 2026 03:32:03 +0000 (0:00:00.452) 0:01:14.585 ******
2026-02-03 03:32:07.526911 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:32:07.526922 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:32:07.526933 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:32:07.526944 | orchestrator |
2026-02-03 03:32:07.526955 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-02-03 03:32:07.526966 | orchestrator | Tuesday 03 February 2026 03:32:03 +0000 (0:00:00.334) 0:01:14.919 ******
2026-02-03 03:32:07.526977 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:32:07.526988 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:32:07.526998 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:32:07.527009 | orchestrator |
2026-02-03 03:32:07.527020 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-02-03 03:32:07.527031 | orchestrator | Tuesday 03 February 2026 03:32:04 +0000 (0:00:00.571) 0:01:15.491 ******
2026-02-03 03:32:07.527042 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:32:07.527053 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:32:07.527064 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:32:07.527074 | orchestrator |
2026-02-03 03:32:07.527085 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-02-03 03:32:07.527096 | orchestrator | Tuesday 03 February 2026 03:32:04 +0000 (0:00:00.364) 0:01:15.855 ******
2026-02-03 03:32:07.527107 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:32:07.527118 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:32:07.527129 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:32:07.527140 | orchestrator |
2026-02-03 03:32:07.527151 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-02-03 03:32:07.527162 | orchestrator | Tuesday 03 February 2026 03:32:05 +0000 (0:00:00.322) 0:01:16.177 ******
2026-02-03 03:32:07.527184 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:32:07.527195 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:32:07.527206 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:32:07.527217 | orchestrator |
2026-02-03 03:32:07.527228 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-02-03 03:32:07.527239 | orchestrator | Tuesday 03 February 2026 03:32:05 +0000 (0:00:00.402) 0:01:16.579 ******
2026-02-03 03:32:07.527250 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:32:07.527261 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:32:07.527271 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:32:07.527282 | orchestrator |
2026-02-03 03:32:07.527293 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-02-03 03:32:07.527304 | orchestrator | Tuesday 03 February 2026 03:32:05 +0000 (0:00:00.535) 0:01:17.114 ******
2026-02-03 03:32:07.527318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:07.527332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:07.527344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:07.527368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:13.974769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:13.974893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:13.974921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:13.974942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:13.974994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:13.975018 | orchestrator |
2026-02-03 03:32:13.975040 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-02-03 03:32:13.975097 | orchestrator | Tuesday 03 February 2026 03:32:07 +0000 (0:00:01.516) 0:01:18.631 ******
2026-02-03 03:32:13.975121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:13.975140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:13.975151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:13.975163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:13.975214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:13.975228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:13.975239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:13.975251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:13.975272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:13.975286 | orchestrator |
2026-02-03 03:32:13.975300 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-02-03 03:32:13.975315 | orchestrator | Tuesday 03 February 2026 03:32:11 +0000 (0:00:03.942) 0:01:22.574 ******
2026-02-03 03:32:13.975329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:13.975349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:13.975401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:13.975423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:13.975444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:13.975487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:38.331830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:38.331992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:38.332006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:38.332016 | orchestrator |
2026-02-03 03:32:38.332027 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-03 03:32:38.332038 | orchestrator | Tuesday 03 February 2026 03:32:13 +0000 (0:00:02.075) 0:01:24.650 ******
2026-02-03 03:32:38.332047 | orchestrator |
2026-02-03 03:32:38.332057 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-03 03:32:38.332066 | orchestrator | Tuesday 03 February 2026 03:32:13 +0000 (0:00:00.066) 0:01:24.716 ******
2026-02-03 03:32:38.332075 | orchestrator |
2026-02-03 03:32:38.332084 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-03 03:32:38.332093 | orchestrator | Tuesday 03 February 2026 03:32:13 +0000 (0:00:00.068) 0:01:24.785 ******
2026-02-03 03:32:38.332102 | orchestrator |
2026-02-03 03:32:38.332111 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-02-03 03:32:38.332120 | orchestrator | Tuesday 03 February 2026 03:32:13 +0000 (0:00:00.295) 0:01:25.080 ******
2026-02-03 03:32:38.332129 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:32:38.332140 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:32:38.332149 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:32:38.332158 | orchestrator |
2026-02-03 03:32:38.332167 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-02-03 03:32:38.332176 | orchestrator | Tuesday 03 February 2026 03:32:21 +0000 (0:00:07.438) 0:01:32.519 ******
2026-02-03 03:32:38.332185 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:32:38.332194 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:32:38.332203 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:32:38.332212 | orchestrator |
2026-02-03 03:32:38.332221 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-02-03 03:32:38.332230 | orchestrator | Tuesday 03 February 2026 03:32:28 +0000 (0:00:07.560) 0:01:40.080 ******
2026-02-03 03:32:38.332239 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:32:38.332248 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:32:38.332257 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:32:38.332266 | orchestrator |
2026-02-03 03:32:38.332275 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-02-03 03:32:38.332284 | orchestrator | Tuesday 03 February 2026 03:32:31 +0000 (0:00:02.492) 0:01:42.572 ******
2026-02-03 03:32:38.332293 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:32:38.332302 | orchestrator |
2026-02-03 03:32:38.332337 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-02-03 03:32:38.332348 | orchestrator | Tuesday 03 February 2026 03:32:31 +0000 (0:00:00.141) 0:01:42.714 ******
2026-02-03 03:32:38.332357 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:32:38.332367 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:32:38.332376 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:32:38.332385 | orchestrator |
2026-02-03 03:32:38.332394 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-02-03 03:32:38.332403 | orchestrator | Tuesday 03 February 2026 03:32:32 +0000 (0:00:01.038) 0:01:43.753 ******
2026-02-03 03:32:38.332412 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:32:38.332434 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:32:38.332443 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:32:38.332452 | orchestrator |
2026-02-03 03:32:38.332461 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-02-03 03:32:38.332470 | orchestrator | Tuesday 03 February 2026 03:32:33 +0000 (0:00:00.634) 0:01:44.387 ******
2026-02-03 03:32:38.332479 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:32:38.332488 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:32:38.332497 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:32:38.332505 | orchestrator |
2026-02-03 03:32:38.332514 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-02-03 03:32:38.332541 | orchestrator | Tuesday 03 February 2026 03:32:34 +0000 (0:00:00.826) 0:01:45.213 ******
2026-02-03 03:32:38.332551 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:32:38.332560 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:32:38.332568 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:32:38.332577 | orchestrator |
2026-02-03 03:32:38.332586 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-02-03 03:32:38.332595 | orchestrator | Tuesday 03 February 2026 03:32:34 +0000 (0:00:00.644) 0:01:45.859 ******
2026-02-03 03:32:38.332604 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:32:38.332613 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:32:38.332639 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:32:38.332649 | orchestrator |
2026-02-03 03:32:38.332658 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-02-03 03:32:38.332667 | orchestrator | Tuesday 03 February 2026 03:32:35 +0000 (0:00:00.797) 0:01:46.656 ******
2026-02-03 03:32:38.332675 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:32:38.332684 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:32:38.332693 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:32:38.332702 | orchestrator |
2026-02-03 03:32:38.332712 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-02-03 03:32:38.332721 | orchestrator | Tuesday 03 February 2026 03:32:36 +0000 (0:00:01.046) 0:01:47.703 ******
2026-02-03 03:32:38.332730 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:32:38.332739 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:32:38.332747 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:32:38.332756 | orchestrator |
2026-02-03 03:32:38.332765 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-02-03 03:32:38.332774 | orchestrator | Tuesday 03 February 2026 03:32:36 +0000 (0:00:00.324) 0:01:48.028 ******
2026-02-03 03:32:38.332786 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:38.332798 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:38.332808 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:38.332817 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:38.332834 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:38.332843 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:38.332852 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:38.332867 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:38.332884 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:45.483096 | orchestrator |
2026-02-03 03:32:45.483206 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-02-03 03:32:45.483223 | orchestrator | Tuesday 03 February 2026 03:32:38 +0000 (0:00:01.408) 0:01:49.437 ******
2026-02-03 03:32:45.483236 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:45.483255 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:45.483272 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:45.483286 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:45.483394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:45.483417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:45.483434 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:45.483451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:45.483484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:45.483495 | orchestrator |
2026-02-03 03:32:45.483505 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-02-03 03:32:45.483515 | orchestrator | Tuesday 03 February 2026 03:32:42 +0000 (0:00:03.892) 0:01:53.330 ******
2026-02-03 03:32:45.483545 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:45.483556 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:45.483567 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:45.483577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:45.483598 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:45.483672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:45.483686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 03:32:45.483699 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:32:45.483716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 03:32:45.483728 | orchestrator | 2026-02-03 03:32:45.483740 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-03 03:32:45.483751 | orchestrator | Tuesday 03 February 2026 03:32:45 +0000 (0:00:03.040) 0:01:56.371 ****** 2026-02-03 03:32:45.483762 | orchestrator | 2026-02-03 03:32:45.483774 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-03 03:32:45.483786 | orchestrator | Tuesday 03 February 2026 03:32:45 +0000 (0:00:00.063) 0:01:56.435 ****** 2026-02-03 03:32:45.483797 | orchestrator | 2026-02-03 03:32:45.483808 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-03 03:32:45.483819 | orchestrator | Tuesday 03 February 2026 03:32:45 +0000 (0:00:00.070) 0:01:56.506 ****** 2026-02-03 03:32:45.483830 | orchestrator | 2026-02-03 03:32:45.483851 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-03 03:33:10.108961 | orchestrator | Tuesday 03 February 2026 03:32:45 +0000 (0:00:00.070) 0:01:56.576 ****** 2026-02-03 03:33:10.109044 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:33:10.109052 | orchestrator | changed: 
[testbed-node-2] 2026-02-03 03:33:10.109056 | orchestrator | 2026-02-03 03:33:10.109062 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-03 03:33:10.109067 | orchestrator | Tuesday 03 February 2026 03:32:51 +0000 (0:00:06.268) 0:02:02.845 ****** 2026-02-03 03:33:10.109071 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:33:10.109075 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:33:10.109079 | orchestrator | 2026-02-03 03:33:10.109083 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-03 03:33:10.109101 | orchestrator | Tuesday 03 February 2026 03:32:57 +0000 (0:00:06.216) 0:02:09.061 ****** 2026-02-03 03:33:10.109105 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:33:10.109109 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:33:10.109113 | orchestrator | 2026-02-03 03:33:10.109117 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-03 03:33:10.109121 | orchestrator | Tuesday 03 February 2026 03:33:04 +0000 (0:00:06.236) 0:02:15.297 ****** 2026-02-03 03:33:10.109124 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:33:10.109128 | orchestrator | 2026-02-03 03:33:10.109132 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-03 03:33:10.109136 | orchestrator | Tuesday 03 February 2026 03:33:04 +0000 (0:00:00.143) 0:02:15.441 ****** 2026-02-03 03:33:10.109140 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:33:10.109145 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:33:10.109148 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:33:10.109152 | orchestrator | 2026-02-03 03:33:10.109156 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-03 03:33:10.109160 | orchestrator | Tuesday 03 February 2026 03:33:05 +0000 (0:00:01.019) 0:02:16.460 ****** 
2026-02-03 03:33:10.109164 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:33:10.109168 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:33:10.109171 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:33:10.109175 | orchestrator | 2026-02-03 03:33:10.109179 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-03 03:33:10.109183 | orchestrator | Tuesday 03 February 2026 03:33:05 +0000 (0:00:00.660) 0:02:17.121 ****** 2026-02-03 03:33:10.109187 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:33:10.109191 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:33:10.109194 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:33:10.109198 | orchestrator | 2026-02-03 03:33:10.109202 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-03 03:33:10.109206 | orchestrator | Tuesday 03 February 2026 03:33:06 +0000 (0:00:00.902) 0:02:18.023 ****** 2026-02-03 03:33:10.109210 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:33:10.109214 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:33:10.109217 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:33:10.109221 | orchestrator | 2026-02-03 03:33:10.109225 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-03 03:33:10.109229 | orchestrator | Tuesday 03 February 2026 03:33:07 +0000 (0:00:00.634) 0:02:18.657 ****** 2026-02-03 03:33:10.109233 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:33:10.109236 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:33:10.109240 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:33:10.109244 | orchestrator | 2026-02-03 03:33:10.109248 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-03 03:33:10.109251 | orchestrator | Tuesday 03 February 2026 03:33:08 +0000 (0:00:01.245) 0:02:19.902 ****** 2026-02-03 03:33:10.109298 | orchestrator 
| ok: [testbed-node-0] 2026-02-03 03:33:10.109302 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:33:10.109305 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:33:10.109309 | orchestrator | 2026-02-03 03:33:10.109313 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 03:33:10.109318 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-03 03:33:10.109323 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-03 03:33:10.109327 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-03 03:33:10.109331 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-03 03:33:10.109339 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-03 03:33:10.109343 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-03 03:33:10.109347 | orchestrator | 2026-02-03 03:33:10.109351 | orchestrator | 2026-02-03 03:33:10.109365 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 03:33:10.109369 | orchestrator | Tuesday 03 February 2026 03:33:09 +0000 (0:00:00.905) 0:02:20.808 ****** 2026-02-03 03:33:10.109372 | orchestrator | =============================================================================== 2026-02-03 03:33:10.109376 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 31.69s 2026-02-03 03:33:10.109380 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.48s 2026-02-03 03:33:10.109384 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.78s 2026-02-03 03:33:10.109387 | orchestrator | ovn-db 
: Restart ovn-nb-db container ----------------------------------- 13.71s 2026-02-03 03:33:10.109391 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.73s 2026-02-03 03:33:10.109404 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.94s 2026-02-03 03:33:10.109408 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.89s 2026-02-03 03:33:10.109411 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.04s 2026-02-03 03:33:10.109415 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.45s 2026-02-03 03:33:10.109419 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.08s 2026-02-03 03:33:10.109423 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.63s 2026-02-03 03:33:10.109426 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.61s 2026-02-03 03:33:10.109430 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.56s 2026-02-03 03:33:10.109434 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.52s 2026-02-03 03:33:10.109438 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.41s 2026-02-03 03:33:10.109441 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.33s 2026-02-03 03:33:10.109445 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.32s 2026-02-03 03:33:10.109449 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.25s 2026-02-03 03:33:10.109453 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.19s 2026-02-03 03:33:10.109457 | orchestrator | ovn-controller : 
Ensuring systemd override directory exists ------------- 1.17s 2026-02-03 03:33:10.456802 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-03 03:33:10.456871 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh 2026-02-03 03:33:12.686620 | orchestrator | 2026-02-03 03:33:12 | INFO  | Trying to run play wipe-partitions in environment custom 2026-02-03 03:33:22.864670 | orchestrator | 2026-02-03 03:33:22 | INFO  | Task bd3d9d40-2eff-4ed6-932f-306b0caa5bfd (wipe-partitions) was prepared for execution. 2026-02-03 03:33:22.864796 | orchestrator | 2026-02-03 03:33:22 | INFO  | It takes a moment until task bd3d9d40-2eff-4ed6-932f-306b0caa5bfd (wipe-partitions) has been started and output is visible here. 2026-02-03 03:33:37.131886 | orchestrator | 2026-02-03 03:33:37.131980 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-02-03 03:33:37.131996 | orchestrator | 2026-02-03 03:33:37.132005 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-02-03 03:33:37.132016 | orchestrator | Tuesday 03 February 2026 03:33:27 +0000 (0:00:00.133) 0:00:00.133 ****** 2026-02-03 03:33:37.132047 | orchestrator | changed: [testbed-node-3] 2026-02-03 03:33:37.132059 | orchestrator | changed: [testbed-node-4] 2026-02-03 03:33:37.132068 | orchestrator | changed: [testbed-node-5] 2026-02-03 03:33:37.132077 | orchestrator | 2026-02-03 03:33:37.132088 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-02-03 03:33:37.132098 | orchestrator | Tuesday 03 February 2026 03:33:27 +0000 (0:00:00.607) 0:00:00.740 ****** 2026-02-03 03:33:37.132108 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:33:37.132117 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:33:37.132126 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:33:37.132134 | orchestrator | 2026-02-03 03:33:37.132143 | 
orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-02-03 03:33:37.132152 | orchestrator | Tuesday 03 February 2026 03:33:28 +0000 (0:00:00.428) 0:00:01.168 ****** 2026-02-03 03:33:37.132162 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:33:37.132170 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:33:37.132179 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:33:37.132188 | orchestrator | 2026-02-03 03:33:37.132197 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-02-03 03:33:37.132248 | orchestrator | Tuesday 03 February 2026 03:33:28 +0000 (0:00:00.585) 0:00:01.754 ****** 2026-02-03 03:33:37.132260 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:33:37.132269 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:33:37.132279 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:33:37.132288 | orchestrator | 2026-02-03 03:33:37.132298 | orchestrator | TASK [Check device availability] *********************************************** 2026-02-03 03:33:37.132307 | orchestrator | Tuesday 03 February 2026 03:33:29 +0000 (0:00:00.301) 0:00:02.056 ****** 2026-02-03 03:33:37.132316 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-03 03:33:37.132325 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-03 03:33:37.132334 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-03 03:33:37.132342 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-03 03:33:37.132351 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-03 03:33:37.132360 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-03 03:33:37.132385 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-03 03:33:37.132394 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-03 03:33:37.132402 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 
2026-02-03 03:33:37.132411 | orchestrator | 2026-02-03 03:33:37.132420 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-02-03 03:33:37.132430 | orchestrator | Tuesday 03 February 2026 03:33:31 +0000 (0:00:02.303) 0:00:04.360 ****** 2026-02-03 03:33:37.132439 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-02-03 03:33:37.132449 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-02-03 03:33:37.132459 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-02-03 03:33:37.132468 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-02-03 03:33:37.132477 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-02-03 03:33:37.132487 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2026-02-03 03:33:37.132497 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-02-03 03:33:37.132507 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-02-03 03:33:37.132516 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-02-03 03:33:37.132525 | orchestrator | 2026-02-03 03:33:37.132533 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-02-03 03:33:37.132542 | orchestrator | Tuesday 03 February 2026 03:33:33 +0000 (0:00:01.599) 0:00:05.959 ****** 2026-02-03 03:33:37.132551 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-03 03:33:37.132559 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-03 03:33:37.132568 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-03 03:33:37.132577 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-03 03:33:37.132601 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-03 03:33:37.132611 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-03 03:33:37.132622 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-03 03:33:37.132630 | orchestrator | 
changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-03 03:33:37.132638 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-02-03 03:33:37.132647 | orchestrator | 2026-02-03 03:33:37.132656 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-02-03 03:33:37.132665 | orchestrator | Tuesday 03 February 2026 03:33:35 +0000 (0:00:02.196) 0:00:08.155 ****** 2026-02-03 03:33:37.132673 | orchestrator | changed: [testbed-node-3] 2026-02-03 03:33:37.132682 | orchestrator | changed: [testbed-node-4] 2026-02-03 03:33:37.132691 | orchestrator | changed: [testbed-node-5] 2026-02-03 03:33:37.132700 | orchestrator | 2026-02-03 03:33:37.132709 | orchestrator | TASK [Request device events from the kernel] *********************************** 2026-02-03 03:33:37.132717 | orchestrator | Tuesday 03 February 2026 03:33:35 +0000 (0:00:00.626) 0:00:08.782 ****** 2026-02-03 03:33:37.132726 | orchestrator | changed: [testbed-node-3] 2026-02-03 03:33:37.132735 | orchestrator | changed: [testbed-node-4] 2026-02-03 03:33:37.132744 | orchestrator | changed: [testbed-node-5] 2026-02-03 03:33:37.132753 | orchestrator | 2026-02-03 03:33:37.132762 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 03:33:37.132773 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 03:33:37.132784 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 03:33:37.132815 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 03:33:37.132824 | orchestrator | 2026-02-03 03:33:37.132832 | orchestrator | 2026-02-03 03:33:37.132840 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 03:33:37.132849 | orchestrator | Tuesday 03 February 2026 03:33:36 +0000 
(0:00:00.692) 0:00:09.475 ****** 2026-02-03 03:33:37.132858 | orchestrator | =============================================================================== 2026-02-03 03:33:37.132866 | orchestrator | Check device availability ----------------------------------------------- 2.30s 2026-02-03 03:33:37.132875 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.20s 2026-02-03 03:33:37.132884 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.60s 2026-02-03 03:33:37.132892 | orchestrator | Request device events from the kernel ----------------------------------- 0.69s 2026-02-03 03:33:37.132902 | orchestrator | Reload udev rules ------------------------------------------------------- 0.63s 2026-02-03 03:33:37.132910 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.61s 2026-02-03 03:33:37.132919 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.59s 2026-02-03 03:33:37.132928 | orchestrator | Remove all rook related logical devices --------------------------------- 0.43s 2026-02-03 03:33:37.132936 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.30s 2026-02-03 03:33:49.812291 | orchestrator | 2026-02-03 03:33:49 | INFO  | Task e4d4bfbc-6ba7-408f-bea3-0ae914630c4b (facts) was prepared for execution. 2026-02-03 03:33:49.812396 | orchestrator | 2026-02-03 03:33:49 | INFO  | It takes a moment until task e4d4bfbc-6ba7-408f-bea3-0ae914630c4b (facts) has been started and output is visible here. 
2026-02-03 03:34:03.848674 | orchestrator | 2026-02-03 03:34:03.848796 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-03 03:34:03.848815 | orchestrator | 2026-02-03 03:34:03.848829 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-03 03:34:03.848841 | orchestrator | Tuesday 03 February 2026 03:33:54 +0000 (0:00:00.271) 0:00:00.271 ****** 2026-02-03 03:34:03.848882 | orchestrator | ok: [testbed-manager] 2026-02-03 03:34:03.848896 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:34:03.848907 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:34:03.848918 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:34:03.848929 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:34:03.848940 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:34:03.848951 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:34:03.848962 | orchestrator | 2026-02-03 03:34:03.848973 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-03 03:34:03.848985 | orchestrator | Tuesday 03 February 2026 03:33:55 +0000 (0:00:01.128) 0:00:01.400 ****** 2026-02-03 03:34:03.848997 | orchestrator | skipping: [testbed-manager] 2026-02-03 03:34:03.849009 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:34:03.849020 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:34:03.849031 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:34:03.849042 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:34:03.849053 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:34:03.849064 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:03.849075 | orchestrator | 2026-02-03 03:34:03.849087 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-03 03:34:03.849098 | orchestrator | 2026-02-03 03:34:03.849109 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-03 03:34:03.849120 | orchestrator | Tuesday 03 February 2026 03:33:56 +0000 (0:00:01.328) 0:00:02.728 ****** 2026-02-03 03:34:03.849131 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:34:03.849142 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:34:03.849153 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:34:03.849189 | orchestrator | ok: [testbed-manager] 2026-02-03 03:34:03.849203 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:34:03.849217 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:34:03.849229 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:34:03.849242 | orchestrator | 2026-02-03 03:34:03.849255 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-03 03:34:03.849268 | orchestrator | 2026-02-03 03:34:03.849282 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-03 03:34:03.849296 | orchestrator | Tuesday 03 February 2026 03:34:02 +0000 (0:00:06.156) 0:00:08.884 ****** 2026-02-03 03:34:03.849309 | orchestrator | skipping: [testbed-manager] 2026-02-03 03:34:03.849322 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:34:03.849336 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:34:03.849350 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:34:03.849362 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:34:03.849375 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:34:03.849389 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:03.849402 | orchestrator | 2026-02-03 03:34:03.849415 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 03:34:03.849429 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 03:34:03.849530 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-03 03:34:03.849552 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 03:34:03.849566 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 03:34:03.849580 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 03:34:03.849593 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 03:34:03.849613 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 03:34:03.849624 | orchestrator | 2026-02-03 03:34:03.849636 | orchestrator | 2026-02-03 03:34:03.849647 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 03:34:03.849659 | orchestrator | Tuesday 03 February 2026 03:34:03 +0000 (0:00:00.577) 0:00:09.462 ****** 2026-02-03 03:34:03.849670 | orchestrator | =============================================================================== 2026-02-03 03:34:03.849681 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.16s 2026-02-03 03:34:03.849691 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.33s 2026-02-03 03:34:03.849702 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.13s 2026-02-03 03:34:03.849713 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.58s 2026-02-03 03:34:06.413598 | orchestrator | 2026-02-03 03:34:06 | INFO  | Task 132d7b19-e6be-4ffc-85e9-df7ba27348ed (ceph-configure-lvm-volumes) was prepared for execution. 
2026-02-03 03:34:06.413699 | orchestrator | 2026-02-03 03:34:06 | INFO  | It takes a moment until task 132d7b19-e6be-4ffc-85e9-df7ba27348ed (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-02-03 03:34:19.036984 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-03 03:34:19.037108 | orchestrator | 2.16.14
2026-02-03 03:34:19.037126 | orchestrator |
2026-02-03 03:34:19.037181 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-02-03 03:34:19.037195 | orchestrator |
2026-02-03 03:34:19.037207 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-03 03:34:19.037219 | orchestrator | Tuesday 03 February 2026 03:34:11 +0000 (0:00:00.369) 0:00:00.369 ******
2026-02-03 03:34:19.037231 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-03 03:34:19.037243 | orchestrator |
2026-02-03 03:34:19.037272 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-03 03:34:19.037283 | orchestrator | Tuesday 03 February 2026 03:34:11 +0000 (0:00:00.279) 0:00:00.649 ******
2026-02-03 03:34:19.037294 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:34:19.037306 | orchestrator |
2026-02-03 03:34:19.037317 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:34:19.037329 | orchestrator | Tuesday 03 February 2026 03:34:11 +0000 (0:00:00.240) 0:00:00.890 ******
2026-02-03 03:34:19.037340 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-02-03 03:34:19.037351 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-02-03 03:34:19.037362 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-02-03 03:34:19.037373 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-02-03 03:34:19.037384 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-02-03 03:34:19.037395 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-02-03 03:34:19.037406 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-02-03 03:34:19.037417 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-02-03 03:34:19.037428 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-02-03 03:34:19.037439 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-02-03 03:34:19.037463 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-02-03 03:34:19.037474 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-02-03 03:34:19.037508 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-02-03 03:34:19.037522 | orchestrator |
2026-02-03 03:34:19.037537 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:34:19.037550 | orchestrator | Tuesday 03 February 2026 03:34:12 +0000 (0:00:00.518) 0:00:01.409 ******
2026-02-03 03:34:19.037564 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:19.037578 | orchestrator |
2026-02-03 03:34:19.037591 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:34:19.037604 | orchestrator | Tuesday 03 February 2026 03:34:12 +0000 (0:00:00.217) 0:00:01.626 ******
2026-02-03 03:34:19.037617 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:19.037630 | orchestrator |
2026-02-03 03:34:19.037642 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:34:19.037655 | orchestrator | Tuesday 03 February 2026 03:34:12 +0000 (0:00:00.218) 0:00:01.845 ******
2026-02-03 03:34:19.037668 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:19.037681 | orchestrator |
2026-02-03 03:34:19.037694 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:34:19.037707 | orchestrator | Tuesday 03 February 2026 03:34:12 +0000 (0:00:00.210) 0:00:02.056 ******
2026-02-03 03:34:19.037720 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:19.037733 | orchestrator |
2026-02-03 03:34:19.037746 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:34:19.037760 | orchestrator | Tuesday 03 February 2026 03:34:13 +0000 (0:00:00.211) 0:00:02.268 ******
2026-02-03 03:34:19.037773 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:19.037787 | orchestrator |
2026-02-03 03:34:19.037801 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:34:19.037814 | orchestrator | Tuesday 03 February 2026 03:34:13 +0000 (0:00:00.211) 0:00:02.479 ******
2026-02-03 03:34:19.037833 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:19.037851 | orchestrator |
2026-02-03 03:34:19.037871 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:34:19.037900 | orchestrator | Tuesday 03 February 2026 03:34:13 +0000 (0:00:00.213) 0:00:02.692 ******
2026-02-03 03:34:19.037921 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:19.037939 | orchestrator |
2026-02-03 03:34:19.037958 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:34:19.037976 | orchestrator | Tuesday 03 February 2026 03:34:13 +0000 (0:00:00.218) 0:00:02.911 ******
2026-02-03 03:34:19.037996 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:19.038013 | orchestrator |
2026-02-03 03:34:19.038103 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:34:19.038115 | orchestrator | Tuesday 03 February 2026 03:34:13 +0000 (0:00:00.212) 0:00:03.124 ******
2026-02-03 03:34:19.038126 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371)
2026-02-03 03:34:19.038213 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371)
2026-02-03 03:34:19.038227 | orchestrator |
2026-02-03 03:34:19.038238 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:34:19.038271 | orchestrator | Tuesday 03 February 2026 03:34:14 +0000 (0:00:00.415) 0:00:03.539 ******
2026-02-03 03:34:19.038283 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b4cf4752-8315-482c-8a5b-0aee9859091f)
2026-02-03 03:34:19.038294 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b4cf4752-8315-482c-8a5b-0aee9859091f)
2026-02-03 03:34:19.038305 | orchestrator |
2026-02-03 03:34:19.038316 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:34:19.038327 | orchestrator | Tuesday 03 February 2026 03:34:15 +0000 (0:00:00.710) 0:00:04.250 ******
2026-02-03 03:34:19.038348 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8097be92-44ca-4be8-a1da-39ba5887696e)
2026-02-03 03:34:19.038372 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8097be92-44ca-4be8-a1da-39ba5887696e)
2026-02-03 03:34:19.038383 | orchestrator |
2026-02-03 03:34:19.038394 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:34:19.038405 | orchestrator | Tuesday 03 February 2026 03:34:15 +0000 (0:00:00.710) 0:00:04.961 ******
2026-02-03 03:34:19.038416 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_30942d1f-f704-43cc-bdd1-e5a5821a35c3)
2026-02-03 03:34:19.038427 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_30942d1f-f704-43cc-bdd1-e5a5821a35c3)
2026-02-03 03:34:19.038438 | orchestrator |
2026-02-03 03:34:19.038450 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:34:19.038461 | orchestrator | Tuesday 03 February 2026 03:34:16 +0000 (0:00:00.958) 0:00:05.920 ******
2026-02-03 03:34:19.038472 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-03 03:34:19.038483 | orchestrator |
2026-02-03 03:34:19.038494 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:34:19.038505 | orchestrator | Tuesday 03 February 2026 03:34:17 +0000 (0:00:00.352) 0:00:06.272 ******
2026-02-03 03:34:19.038516 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-02-03 03:34:19.038527 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-02-03 03:34:19.038537 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-02-03 03:34:19.038548 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-02-03 03:34:19.038559 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-02-03 03:34:19.038570 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-02-03 03:34:19.038581 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-02-03 03:34:19.038591 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-02-03 03:34:19.038602 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-02-03 03:34:19.038613 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-02-03 03:34:19.038624 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-02-03 03:34:19.038635 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-02-03 03:34:19.038645 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-02-03 03:34:19.038656 | orchestrator |
2026-02-03 03:34:19.038667 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:34:19.038678 | orchestrator | Tuesday 03 February 2026 03:34:17 +0000 (0:00:00.399) 0:00:06.672 ******
2026-02-03 03:34:19.038689 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:19.038700 | orchestrator |
2026-02-03 03:34:19.038711 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:34:19.038722 | orchestrator | Tuesday 03 February 2026 03:34:17 +0000 (0:00:00.213) 0:00:06.885 ******
2026-02-03 03:34:19.038732 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:19.038743 | orchestrator |
2026-02-03 03:34:19.038754 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:34:19.038765 | orchestrator | Tuesday 03 February 2026 03:34:17 +0000 (0:00:00.235) 0:00:07.121 ******
2026-02-03 03:34:19.038776 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:19.038787 | orchestrator |
2026-02-03 03:34:19.038798 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:34:19.038809 | orchestrator | Tuesday 03 February 2026 03:34:18 +0000 (0:00:00.224) 0:00:07.345 ******
2026-02-03 03:34:19.038828 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:19.038840 | orchestrator |
2026-02-03 03:34:19.038851 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:34:19.038862 | orchestrator | Tuesday 03 February 2026 03:34:18 +0000 (0:00:00.224) 0:00:07.570 ******
2026-02-03 03:34:19.038873 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:19.038884 | orchestrator |
2026-02-03 03:34:19.038895 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:34:19.038905 | orchestrator | Tuesday 03 February 2026 03:34:18 +0000 (0:00:00.210) 0:00:07.781 ******
2026-02-03 03:34:19.038916 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:19.038927 | orchestrator |
2026-02-03 03:34:19.038938 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:34:19.038949 | orchestrator | Tuesday 03 February 2026 03:34:18 +0000 (0:00:00.208) 0:00:07.989 ******
2026-02-03 03:34:19.038960 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:19.038971 | orchestrator |
2026-02-03 03:34:19.038988 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:34:27.344761 | orchestrator | Tuesday 03 February 2026 03:34:19 +0000 (0:00:00.224) 0:00:08.213 ******
2026-02-03 03:34:27.344888 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:27.344912 | orchestrator |
2026-02-03 03:34:27.344929 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:34:27.344944 | orchestrator | Tuesday 03 February 2026 03:34:19 +0000 (0:00:00.208) 0:00:08.422 ******
2026-02-03 03:34:27.344958 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-02-03 03:34:27.344973 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-02-03 03:34:27.344987 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-02-03 03:34:27.345021 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-02-03 03:34:27.345036 | orchestrator |
2026-02-03 03:34:27.345051 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:34:27.345066 | orchestrator | Tuesday 03 February 2026 03:34:20 +0000 (0:00:01.133) 0:00:09.555 ******
2026-02-03 03:34:27.345081 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:27.345096 | orchestrator |
2026-02-03 03:34:27.345111 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:34:27.345207 | orchestrator | Tuesday 03 February 2026 03:34:20 +0000 (0:00:00.215) 0:00:09.770 ******
2026-02-03 03:34:27.345229 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:27.345244 | orchestrator |
2026-02-03 03:34:27.345259 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:34:27.345275 | orchestrator | Tuesday 03 February 2026 03:34:20 +0000 (0:00:00.206) 0:00:09.977 ******
2026-02-03 03:34:27.345291 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:27.345306 | orchestrator |
2026-02-03 03:34:27.345320 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:34:27.345335 | orchestrator | Tuesday 03 February 2026 03:34:21 +0000 (0:00:00.221) 0:00:10.199 ******
2026-02-03 03:34:27.345349 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:27.345364 | orchestrator |
2026-02-03 03:34:27.345378 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-02-03 03:34:27.345392 | orchestrator | Tuesday 03 February 2026 03:34:21 +0000 (0:00:00.262) 0:00:10.461 ******
2026-02-03 03:34:27.345408 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-02-03 03:34:27.345423 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-02-03 03:34:27.345439 | orchestrator |
2026-02-03 03:34:27.345454 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-02-03 03:34:27.345469 | orchestrator | Tuesday 03 February 2026 03:34:21 +0000 (0:00:00.204) 0:00:10.666 ******
2026-02-03 03:34:27.345484 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:27.345500 | orchestrator |
2026-02-03 03:34:27.345514 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-02-03 03:34:27.345529 | orchestrator | Tuesday 03 February 2026 03:34:21 +0000 (0:00:00.145) 0:00:10.812 ******
2026-02-03 03:34:27.345571 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:27.345589 | orchestrator |
2026-02-03 03:34:27.345604 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-02-03 03:34:27.345620 | orchestrator | Tuesday 03 February 2026 03:34:21 +0000 (0:00:00.146) 0:00:10.958 ******
2026-02-03 03:34:27.345635 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:27.345650 | orchestrator |
2026-02-03 03:34:27.345665 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-02-03 03:34:27.345679 | orchestrator | Tuesday 03 February 2026 03:34:21 +0000 (0:00:00.144) 0:00:11.102 ******
2026-02-03 03:34:27.345692 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:34:27.345707 | orchestrator |
2026-02-03 03:34:27.345723 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-02-03 03:34:27.345737 | orchestrator | Tuesday 03 February 2026 03:34:22 +0000 (0:00:00.172) 0:00:11.275 ******
2026-02-03 03:34:27.345754 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '85b6ff9c-bd3f-596f-9d81-0006b9d69e29'}})
2026-02-03 03:34:27.345769 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bafb60f3-a5a9-526b-adce-8ea58a9a19cd'}})
2026-02-03 03:34:27.345785 | orchestrator |
2026-02-03 03:34:27.345799 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-02-03 03:34:27.345813 | orchestrator | Tuesday 03 February 2026 03:34:22 +0000 (0:00:00.176) 0:00:11.451 ******
2026-02-03 03:34:27.345830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '85b6ff9c-bd3f-596f-9d81-0006b9d69e29'}})
2026-02-03 03:34:27.345847 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bafb60f3-a5a9-526b-adce-8ea58a9a19cd'}})
2026-02-03 03:34:27.345861 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:27.345876 | orchestrator |
2026-02-03 03:34:27.345891 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-02-03 03:34:27.345906 | orchestrator | Tuesday 03 February 2026 03:34:22 +0000 (0:00:00.389) 0:00:11.841 ******
2026-02-03 03:34:27.345921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '85b6ff9c-bd3f-596f-9d81-0006b9d69e29'}})
2026-02-03 03:34:27.345933 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bafb60f3-a5a9-526b-adce-8ea58a9a19cd'}})
2026-02-03 03:34:27.345942 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:27.345951 | orchestrator |
2026-02-03 03:34:27.345959 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-02-03 03:34:27.345968 | orchestrator | Tuesday 03 February 2026 03:34:22 +0000 (0:00:00.164) 0:00:12.006 ******
2026-02-03 03:34:27.345977 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '85b6ff9c-bd3f-596f-9d81-0006b9d69e29'}})
2026-02-03 03:34:27.346008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bafb60f3-a5a9-526b-adce-8ea58a9a19cd'}})
2026-02-03 03:34:27.346089 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:27.346112 | orchestrator |
2026-02-03 03:34:27.346155 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-02-03 03:34:27.346172 | orchestrator | Tuesday 03 February 2026 03:34:22 +0000 (0:00:00.165) 0:00:12.171 ******
2026-02-03 03:34:27.346186 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:34:27.346201 | orchestrator |
2026-02-03 03:34:27.346211 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-02-03 03:34:27.346229 | orchestrator | Tuesday 03 February 2026 03:34:23 +0000 (0:00:00.195) 0:00:12.367 ******
2026-02-03 03:34:27.346238 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:34:27.346247 | orchestrator |
2026-02-03 03:34:27.346255 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-02-03 03:34:27.346264 | orchestrator | Tuesday 03 February 2026 03:34:23 +0000 (0:00:00.163) 0:00:12.530 ******
2026-02-03 03:34:27.346283 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:27.346291 | orchestrator |
2026-02-03 03:34:27.346300 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-02-03 03:34:27.346309 | orchestrator | Tuesday 03 February 2026 03:34:23 +0000 (0:00:00.148) 0:00:12.679 ******
2026-02-03 03:34:27.346318 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:27.346327 | orchestrator |
2026-02-03 03:34:27.346335 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-02-03 03:34:27.346344 | orchestrator | Tuesday 03 February 2026 03:34:23 +0000 (0:00:00.173) 0:00:12.852 ******
2026-02-03 03:34:27.346353 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:27.346361 | orchestrator |
2026-02-03 03:34:27.346370 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-02-03 03:34:27.346379 | orchestrator | Tuesday 03 February 2026 03:34:23 +0000 (0:00:00.154) 0:00:13.007 ******
2026-02-03 03:34:27.346387 | orchestrator | ok: [testbed-node-3] => {
2026-02-03 03:34:27.346396 | orchestrator |     "ceph_osd_devices": {
2026-02-03 03:34:27.346405 | orchestrator |         "sdb": {
2026-02-03 03:34:27.346414 | orchestrator |             "osd_lvm_uuid": "85b6ff9c-bd3f-596f-9d81-0006b9d69e29"
2026-02-03 03:34:27.346423 | orchestrator |         },
2026-02-03 03:34:27.346432 | orchestrator |         "sdc": {
2026-02-03 03:34:27.346441 | orchestrator |             "osd_lvm_uuid": "bafb60f3-a5a9-526b-adce-8ea58a9a19cd"
2026-02-03 03:34:27.346450 | orchestrator |         }
2026-02-03 03:34:27.346459 | orchestrator |     }
2026-02-03 03:34:27.346468 | orchestrator | }
2026-02-03 03:34:27.346477 | orchestrator |
2026-02-03 03:34:27.346486 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-02-03 03:34:27.346495 | orchestrator | Tuesday 03 February 2026 03:34:23 +0000 (0:00:00.164) 0:00:13.172 ******
2026-02-03 03:34:27.346504 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:27.346512 | orchestrator |
2026-02-03 03:34:27.346521 | orchestrator | TASK [Print DB devices] ********************************************************
2026-02-03 03:34:27.346530 | orchestrator | Tuesday 03 February 2026 03:34:24 +0000 (0:00:00.153) 0:00:13.325 ******
2026-02-03 03:34:27.346538 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:27.346547 | orchestrator |
2026-02-03 03:34:27.346556 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-02-03 03:34:27.346565 | orchestrator | Tuesday 03 February 2026 03:34:24 +0000 (0:00:00.148) 0:00:13.474 ******
2026-02-03 03:34:27.346573 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:34:27.346582 | orchestrator |
2026-02-03 03:34:27.346591 | orchestrator | TASK [Print configuration data] ************************************************
2026-02-03 03:34:27.346599 | orchestrator | Tuesday 03 February 2026 03:34:24 +0000 (0:00:00.143) 0:00:13.618 ******
2026-02-03 03:34:27.346608 | orchestrator | changed: [testbed-node-3] => {
2026-02-03 03:34:27.346621 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-02-03 03:34:27.346635 | orchestrator |         "ceph_osd_devices": {
2026-02-03 03:34:27.346657 | orchestrator |             "sdb": {
2026-02-03 03:34:27.346674 | orchestrator |                 "osd_lvm_uuid": "85b6ff9c-bd3f-596f-9d81-0006b9d69e29"
2026-02-03 03:34:27.346688 | orchestrator |             },
2026-02-03 03:34:27.346702 | orchestrator |             "sdc": {
2026-02-03 03:34:27.346717 | orchestrator |                 "osd_lvm_uuid": "bafb60f3-a5a9-526b-adce-8ea58a9a19cd"
2026-02-03 03:34:27.346730 | orchestrator |             }
2026-02-03 03:34:27.346745 | orchestrator |         },
2026-02-03 03:34:27.346759 | orchestrator |         "lvm_volumes": [
2026-02-03 03:34:27.346774 | orchestrator |             {
2026-02-03 03:34:27.346790 | orchestrator |                 "data": "osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29",
2026-02-03 03:34:27.346805 | orchestrator |                 "data_vg": "ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29"
2026-02-03 03:34:27.346817 | orchestrator |             },
2026-02-03 03:34:27.346829 | orchestrator |             {
2026-02-03 03:34:27.346842 | orchestrator |                 "data": "osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd",
2026-02-03 03:34:27.346869 | orchestrator |                 "data_vg": "ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd"
2026-02-03 03:34:27.346884 | orchestrator |             }
2026-02-03 03:34:27.346898 | orchestrator |         ]
2026-02-03 03:34:27.346913 | orchestrator |     }
2026-02-03 03:34:27.346923 | orchestrator | }
2026-02-03 03:34:27.346931 | orchestrator |
2026-02-03 03:34:27.346940 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-02-03 03:34:27.346949 | orchestrator | Tuesday 03 February 2026 03:34:24 +0000 (0:00:00.438) 0:00:14.057 ******
2026-02-03 03:34:27.346957 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-03 03:34:27.346966 | orchestrator |
2026-02-03 03:34:27.346975 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-02-03 03:34:27.346983 | orchestrator |
2026-02-03 03:34:27.346992 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-03 03:34:27.347001 | orchestrator | Tuesday 03 February 2026 03:34:26 +0000 (0:00:01.897) 0:00:15.954 ******
2026-02-03 03:34:27.347009 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-02-03 03:34:27.347018 | orchestrator |
2026-02-03 03:34:27.347026 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-03 03:34:27.347039 | orchestrator | Tuesday 03 February 2026 03:34:27 +0000 (0:00:00.285) 0:00:16.240 ******
2026-02-03 03:34:27.347054 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:34:27.347068 | orchestrator |
2026-02-03 03:34:27.347098 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:34:37.400831 | orchestrator | Tuesday 03 February 2026 03:34:27 +0000 (0:00:00.286) 0:00:16.527 ******
2026-02-03 03:34:37.400917 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-02-03 03:34:37.400928 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-02-03 03:34:37.400935 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-02-03 03:34:37.400956 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-02-03 03:34:37.400963 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-02-03 03:34:37.400970 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-02-03 03:34:37.400976 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-02-03 03:34:37.400983 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-02-03 03:34:37.400990 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-02-03 03:34:37.401001 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-02-03 03:34:37.401007 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-02-03 03:34:37.401013 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-02-03 03:34:37.401020 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-02-03 03:34:37.401026 | orchestrator |
2026-02-03 03:34:37.401033 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:34:37.401039 | orchestrator | Tuesday 03 February 2026 03:34:27 +0000 (0:00:00.409) 0:00:16.937 ******
2026-02-03 03:34:37.401046 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:34:37.401053 | orchestrator |
2026-02-03 03:34:37.401060 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:34:37.401066 | orchestrator | Tuesday 03 February 2026 03:34:27 +0000 (0:00:00.236) 0:00:17.174 ******
2026-02-03 03:34:37.401073 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:34:37.401079 | orchestrator |
2026-02-03 03:34:37.401085 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:34:37.401090 | orchestrator | Tuesday 03 February 2026 03:34:28 +0000 (0:00:00.225) 0:00:17.400 ******
2026-02-03 03:34:37.401158 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:34:37.401165 | orchestrator |
2026-02-03 03:34:37.401171 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:34:37.401177 | orchestrator | Tuesday 03 February 2026 03:34:28 +0000 (0:00:00.237) 0:00:17.637 ******
2026-02-03 03:34:37.401183 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:34:37.401190 | orchestrator |
2026-02-03 03:34:37.401196 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:34:37.401203 | orchestrator | Tuesday 03 February 2026 03:34:29 +0000 (0:00:00.669) 0:00:18.306 ******
2026-02-03 03:34:37.401208 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:34:37.401214 | orchestrator |
2026-02-03 03:34:37.401221 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:34:37.401226 | orchestrator | Tuesday 03 February 2026 03:34:29 +0000 (0:00:00.234) 0:00:18.541 ******
2026-02-03 03:34:37.401232 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:34:37.401237 | orchestrator |
2026-02-03 03:34:37.401243 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:34:37.401248 | orchestrator | Tuesday 03 February 2026 03:34:29 +0000 (0:00:00.241) 0:00:18.783 ******
2026-02-03 03:34:37.401254 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:34:37.401259 | orchestrator |
2026-02-03 03:34:37.401265 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:34:37.401270 | orchestrator | Tuesday 03 February 2026 03:34:29 +0000 (0:00:00.216) 0:00:19.000 ******
2026-02-03 03:34:37.401276 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:34:37.401282 | orchestrator |
2026-02-03 03:34:37.401287 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:34:37.401293 | orchestrator | Tuesday 03 February 2026 03:34:30 +0000 (0:00:00.218) 0:00:19.218 ******
2026-02-03 03:34:37.401299 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8)
2026-02-03 03:34:37.401306 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8)
2026-02-03 03:34:37.401313 | orchestrator |
2026-02-03 03:34:37.401319 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:34:37.401325 | orchestrator | Tuesday 03 February 2026 03:34:30 +0000 (0:00:00.456) 0:00:19.675 ******
2026-02-03 03:34:37.401330 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6b074c22-654d-40e5-9251-7e10d9fad41a)
2026-02-03 03:34:37.401336 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6b074c22-654d-40e5-9251-7e10d9fad41a)
2026-02-03 03:34:37.401342 | orchestrator |
2026-02-03 03:34:37.401348 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:34:37.401361 | orchestrator | Tuesday 03 February 2026 03:34:30 +0000 (0:00:00.488) 0:00:20.164 ******
2026-02-03 03:34:37.401367 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f58f055b-eadc-4fe1-a72e-2d1917f1f2dd)
2026-02-03 03:34:37.401372 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f58f055b-eadc-4fe1-a72e-2d1917f1f2dd)
2026-02-03 03:34:37.401378 | orchestrator |
2026-02-03 03:34:37.401390 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:34:37.401413 | orchestrator | Tuesday 03 February 2026 03:34:31 +0000 (0:00:00.459) 0:00:20.623 ******
2026-02-03 03:34:37.401420 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_15b94581-7087-40af-83f2-cd9970e768be)
2026-02-03 03:34:37.401425 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_15b94581-7087-40af-83f2-cd9970e768be)
2026-02-03 03:34:37.401431 | orchestrator |
2026-02-03 03:34:37.401437 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:34:37.401449 | orchestrator | Tuesday 03 February 2026 03:34:32 +0000 (0:00:00.725) 0:00:21.349 ******
2026-02-03 03:34:37.401456 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-03 03:34:37.401470 | orchestrator |
2026-02-03 03:34:37.401478 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:34:37.401484 | orchestrator | Tuesday 03 February 2026 03:34:32 +0000 (0:00:00.662) 0:00:22.012 ******
2026-02-03 03:34:37.401491 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-02-03 03:34:37.401499 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-02-03 03:34:37.401505 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-02-03 03:34:37.401512 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-02-03 03:34:37.401519 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-02-03 03:34:37.401526 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-02-03 03:34:37.401532 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-02-03 03:34:37.401538 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-02-03 03:34:37.401544 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-02-03 03:34:37.401550 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-02-03 03:34:37.401557 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-02-03 03:34:37.401564 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-02-03 03:34:37.401570 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-02-03 03:34:37.401576 | orchestrator |
2026-02-03 03:34:37.401582 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:34:37.401588 | orchestrator | Tuesday 03 February 2026 03:34:33 +0000 (0:00:00.973) 0:00:22.986 ******
2026-02-03 03:34:37.401596 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:34:37.401602 | orchestrator |
2026-02-03 03:34:37.401609 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:34:37.401615 | orchestrator | Tuesday 03 February 2026 03:34:34 +0000 (0:00:00.228) 0:00:23.215 ******
2026-02-03 03:34:37.401620 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:34:37.401624 | orchestrator |
2026-02-03 03:34:37.401629 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:34:37.401633 | orchestrator | Tuesday 03 February 2026 03:34:34 +0000 (0:00:00.220) 0:00:23.435 ******
2026-02-03 03:34:37.401638 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:34:37.401642 | orchestrator |
2026-02-03 03:34:37.401647 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:34:37.401652 | orchestrator | Tuesday 03 February 2026 03:34:34 +0000 (0:00:00.264) 0:00:23.700 ******
2026-02-03 03:34:37.401656 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:34:37.401660 | orchestrator |
2026-02-03 03:34:37.401665 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:34:37.401670 | orchestrator | Tuesday 03 February 2026 03:34:34 +0000 (0:00:00.228) 0:00:23.929 ******
2026-02-03 03:34:37.401674 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:34:37.401678 | orchestrator |
2026-02-03 03:34:37.401683 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:34:37.401687 | orchestrator | Tuesday 03 February 2026 03:34:34 +0000 (0:00:00.228) 0:00:24.157 ******
2026-02-03 03:34:37.401692 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:34:37.401696 | orchestrator |
2026-02-03 03:34:37.401701 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:34:37.401708 | orchestrator | Tuesday 03 February 2026 03:34:35 +0000 (0:00:00.227) 0:00:24.384 ******
2026-02-03 03:34:37.401714 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:34:37.401727 | orchestrator |
2026-02-03 03:34:37.401734 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:34:37.401740 | orchestrator | Tuesday 03 February 2026 03:34:35 +0000 (0:00:00.255) 0:00:24.640 ******
2026-02-03 03:34:37.401746 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:34:37.401752 | orchestrator |
2026-02-03 03:34:37.401758 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:34:37.401764 | orchestrator | Tuesday 03 February 2026 03:34:35 +0000 (0:00:00.249) 0:00:24.889 ******
2026-02-03 03:34:37.401771 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-02-03 03:34:37.401778 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-02-03 03:34:37.401785 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-02-03 03:34:37.401791 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-02-03 03:34:37.401797 | orchestrator |
2026-02-03 03:34:37.401803 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:34:37.401810 | orchestrator | Tuesday 03 February 2026 03:34:36 +0000 (0:00:00.972) 0:00:25.862 ******
2026-02-03 03:34:37.401816 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:34:43.898275 | orchestrator |
2026-02-03 03:34:43.898420 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:34:43.898446 | orchestrator | Tuesday 03 February 2026 03:34:37 +0000 (0:00:00.719) 0:00:26.582 ******
2026-02-03 03:34:43.898464 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:34:43.898481 | orchestrator |
2026-02-03 03:34:43.898496 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:34:43.898513 | orchestrator | Tuesday 03 February 2026 03:34:37 +0000 (0:00:00.238) 0:00:26.821 ******
2026-02-03 03:34:43.898549 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:34:43.898565 | orchestrator |
2026-02-03 03:34:43.898580 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:34:43.898596 | orchestrator | Tuesday 03 February 2026 03:34:37 +0000 (0:00:00.230) 0:00:27.051 ******
2026-02-03 03:34:43.898611 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:34:43.898652 | orchestrator |
2026-02-03 03:34:43.898668 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-02-03 03:34:43.898685 | orchestrator | Tuesday 03 February 2026 03:34:38 +0000 (0:00:00.216) 0:00:27.267 ******
2026-02-03 03:34:43.898702 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-02-03 03:34:43.898720 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-02-03 03:34:43.898736 | orchestrator |
2026-02-03 03:34:43.898752 | orchestrator | TASK [Generate WAL VG names]
*************************************************** 2026-02-03 03:34:43.898763 | orchestrator | Tuesday 03 February 2026 03:34:38 +0000 (0:00:00.182) 0:00:27.450 ****** 2026-02-03 03:34:43.898774 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:34:43.898791 | orchestrator | 2026-02-03 03:34:43.898807 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-03 03:34:43.898824 | orchestrator | Tuesday 03 February 2026 03:34:38 +0000 (0:00:00.158) 0:00:27.609 ****** 2026-02-03 03:34:43.898840 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:34:43.898856 | orchestrator | 2026-02-03 03:34:43.898897 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-03 03:34:43.898915 | orchestrator | Tuesday 03 February 2026 03:34:38 +0000 (0:00:00.154) 0:00:27.763 ****** 2026-02-03 03:34:43.898932 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:34:43.898950 | orchestrator | 2026-02-03 03:34:43.898963 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-03 03:34:43.898975 | orchestrator | Tuesday 03 February 2026 03:34:38 +0000 (0:00:00.142) 0:00:27.905 ****** 2026-02-03 03:34:43.898986 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:34:43.898998 | orchestrator | 2026-02-03 03:34:43.899010 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-03 03:34:43.899022 | orchestrator | Tuesday 03 February 2026 03:34:38 +0000 (0:00:00.160) 0:00:28.066 ****** 2026-02-03 03:34:43.899062 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '121565c5-01e5-5794-959e-075d91e35362'}}) 2026-02-03 03:34:43.899074 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1a37b12a-042e-589b-8d7d-13944ef33291'}}) 2026-02-03 03:34:43.899087 | orchestrator | 2026-02-03 03:34:43.899098 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-02-03 03:34:43.899134 | orchestrator | Tuesday 03 February 2026 03:34:39 +0000 (0:00:00.197) 0:00:28.264 ****** 2026-02-03 03:34:43.899147 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '121565c5-01e5-5794-959e-075d91e35362'}})  2026-02-03 03:34:43.899161 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1a37b12a-042e-589b-8d7d-13944ef33291'}})  2026-02-03 03:34:43.899173 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:34:43.899185 | orchestrator | 2026-02-03 03:34:43.899195 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-03 03:34:43.899205 | orchestrator | Tuesday 03 February 2026 03:34:39 +0000 (0:00:00.185) 0:00:28.449 ****** 2026-02-03 03:34:43.899214 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '121565c5-01e5-5794-959e-075d91e35362'}})  2026-02-03 03:34:43.899224 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1a37b12a-042e-589b-8d7d-13944ef33291'}})  2026-02-03 03:34:43.899233 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:34:43.899243 | orchestrator | 2026-02-03 03:34:43.899253 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-03 03:34:43.899263 | orchestrator | Tuesday 03 February 2026 03:34:39 +0000 (0:00:00.394) 0:00:28.844 ****** 2026-02-03 03:34:43.899272 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '121565c5-01e5-5794-959e-075d91e35362'}})  2026-02-03 03:34:43.899282 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1a37b12a-042e-589b-8d7d-13944ef33291'}})  2026-02-03 03:34:43.899292 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:34:43.899301 | 
orchestrator | 2026-02-03 03:34:43.899311 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-03 03:34:43.899320 | orchestrator | Tuesday 03 February 2026 03:34:39 +0000 (0:00:00.176) 0:00:29.020 ****** 2026-02-03 03:34:43.899330 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:34:43.899339 | orchestrator | 2026-02-03 03:34:43.899349 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-03 03:34:43.899359 | orchestrator | Tuesday 03 February 2026 03:34:40 +0000 (0:00:00.173) 0:00:29.193 ****** 2026-02-03 03:34:43.899368 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:34:43.899378 | orchestrator | 2026-02-03 03:34:43.899387 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-03 03:34:43.899397 | orchestrator | Tuesday 03 February 2026 03:34:40 +0000 (0:00:00.152) 0:00:29.346 ****** 2026-02-03 03:34:43.899427 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:34:43.899438 | orchestrator | 2026-02-03 03:34:43.899447 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-03 03:34:43.899457 | orchestrator | Tuesday 03 February 2026 03:34:40 +0000 (0:00:00.163) 0:00:29.510 ****** 2026-02-03 03:34:43.899467 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:34:43.899476 | orchestrator | 2026-02-03 03:34:43.899486 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-03 03:34:43.899496 | orchestrator | Tuesday 03 February 2026 03:34:40 +0000 (0:00:00.156) 0:00:29.667 ****** 2026-02-03 03:34:43.899512 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:34:43.899522 | orchestrator | 2026-02-03 03:34:43.899532 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-03 03:34:43.899542 | orchestrator | Tuesday 03 February 2026 03:34:40 +0000 
(0:00:00.154) 0:00:29.821 ****** 2026-02-03 03:34:43.899562 | orchestrator | ok: [testbed-node-4] => { 2026-02-03 03:34:43.899579 | orchestrator |  "ceph_osd_devices": { 2026-02-03 03:34:43.899596 | orchestrator |  "sdb": { 2026-02-03 03:34:43.899612 | orchestrator |  "osd_lvm_uuid": "121565c5-01e5-5794-959e-075d91e35362" 2026-02-03 03:34:43.899628 | orchestrator |  }, 2026-02-03 03:34:43.899645 | orchestrator |  "sdc": { 2026-02-03 03:34:43.899661 | orchestrator |  "osd_lvm_uuid": "1a37b12a-042e-589b-8d7d-13944ef33291" 2026-02-03 03:34:43.899677 | orchestrator |  } 2026-02-03 03:34:43.899692 | orchestrator |  } 2026-02-03 03:34:43.899710 | orchestrator | } 2026-02-03 03:34:43.899725 | orchestrator | 2026-02-03 03:34:43.899742 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-03 03:34:43.899757 | orchestrator | Tuesday 03 February 2026 03:34:40 +0000 (0:00:00.138) 0:00:29.960 ****** 2026-02-03 03:34:43.899774 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:34:43.899791 | orchestrator | 2026-02-03 03:34:43.899809 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-03 03:34:43.899826 | orchestrator | Tuesday 03 February 2026 03:34:40 +0000 (0:00:00.144) 0:00:30.105 ****** 2026-02-03 03:34:43.899842 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:34:43.899860 | orchestrator | 2026-02-03 03:34:43.899877 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-03 03:34:43.899893 | orchestrator | Tuesday 03 February 2026 03:34:41 +0000 (0:00:00.149) 0:00:30.255 ****** 2026-02-03 03:34:43.899909 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:34:43.899926 | orchestrator | 2026-02-03 03:34:43.899942 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-03 03:34:43.899958 | orchestrator | Tuesday 03 February 2026 03:34:41 +0000 
(0:00:00.157) 0:00:30.412 ****** 2026-02-03 03:34:43.899970 | orchestrator | changed: [testbed-node-4] => { 2026-02-03 03:34:43.899980 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-03 03:34:43.899990 | orchestrator |  "ceph_osd_devices": { 2026-02-03 03:34:43.900000 | orchestrator |  "sdb": { 2026-02-03 03:34:43.900010 | orchestrator |  "osd_lvm_uuid": "121565c5-01e5-5794-959e-075d91e35362" 2026-02-03 03:34:43.900019 | orchestrator |  }, 2026-02-03 03:34:43.900029 | orchestrator |  "sdc": { 2026-02-03 03:34:43.900039 | orchestrator |  "osd_lvm_uuid": "1a37b12a-042e-589b-8d7d-13944ef33291" 2026-02-03 03:34:43.900049 | orchestrator |  } 2026-02-03 03:34:43.900059 | orchestrator |  }, 2026-02-03 03:34:43.900068 | orchestrator |  "lvm_volumes": [ 2026-02-03 03:34:43.900078 | orchestrator |  { 2026-02-03 03:34:43.900089 | orchestrator |  "data": "osd-block-121565c5-01e5-5794-959e-075d91e35362", 2026-02-03 03:34:43.900098 | orchestrator |  "data_vg": "ceph-121565c5-01e5-5794-959e-075d91e35362" 2026-02-03 03:34:43.900176 | orchestrator |  }, 2026-02-03 03:34:43.900187 | orchestrator |  { 2026-02-03 03:34:43.900197 | orchestrator |  "data": "osd-block-1a37b12a-042e-589b-8d7d-13944ef33291", 2026-02-03 03:34:43.900207 | orchestrator |  "data_vg": "ceph-1a37b12a-042e-589b-8d7d-13944ef33291" 2026-02-03 03:34:43.900217 | orchestrator |  } 2026-02-03 03:34:43.900227 | orchestrator |  ] 2026-02-03 03:34:43.900236 | orchestrator |  } 2026-02-03 03:34:43.900244 | orchestrator | } 2026-02-03 03:34:43.900252 | orchestrator | 2026-02-03 03:34:43.900260 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-03 03:34:43.900268 | orchestrator | Tuesday 03 February 2026 03:34:41 +0000 (0:00:00.463) 0:00:30.875 ****** 2026-02-03 03:34:43.900276 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-03 03:34:43.900284 | orchestrator | 2026-02-03 03:34:43.900292 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-02-03 03:34:43.900299 | orchestrator | 2026-02-03 03:34:43.900307 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-03 03:34:43.900315 | orchestrator | Tuesday 03 February 2026 03:34:42 +0000 (0:00:01.250) 0:00:32.126 ****** 2026-02-03 03:34:43.900334 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-03 03:34:43.900342 | orchestrator | 2026-02-03 03:34:43.900350 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-03 03:34:43.900358 | orchestrator | Tuesday 03 February 2026 03:34:43 +0000 (0:00:00.324) 0:00:32.450 ****** 2026-02-03 03:34:43.900365 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:34:43.900373 | orchestrator | 2026-02-03 03:34:43.900381 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:34:43.900389 | orchestrator | Tuesday 03 February 2026 03:34:43 +0000 (0:00:00.243) 0:00:32.694 ****** 2026-02-03 03:34:43.900397 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-03 03:34:43.900405 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-02-03 03:34:43.900413 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-03 03:34:43.900420 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-02-03 03:34:43.900428 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-02-03 03:34:43.900446 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-02-03 03:34:53.119587 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-02-03 03:34:53.119733 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-02-03 03:34:53.119746 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-02-03 03:34:53.119752 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-02-03 03:34:53.119769 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-02-03 03:34:53.119775 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-02-03 03:34:53.119780 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-02-03 03:34:53.119785 | orchestrator | 2026-02-03 03:34:53.119791 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:34:53.119796 | orchestrator | Tuesday 03 February 2026 03:34:43 +0000 (0:00:00.381) 0:00:33.075 ****** 2026-02-03 03:34:53.119801 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:53.119808 | orchestrator | 2026-02-03 03:34:53.119813 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:34:53.119818 | orchestrator | Tuesday 03 February 2026 03:34:44 +0000 (0:00:00.215) 0:00:33.290 ****** 2026-02-03 03:34:53.119823 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:53.119828 | orchestrator | 2026-02-03 03:34:53.119833 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:34:53.119838 | orchestrator | Tuesday 03 February 2026 03:34:44 +0000 (0:00:00.219) 0:00:33.510 ****** 2026-02-03 03:34:53.119842 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:53.119847 | orchestrator | 2026-02-03 03:34:53.119852 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:34:53.119857 | 
orchestrator | Tuesday 03 February 2026 03:34:44 +0000 (0:00:00.209) 0:00:33.720 ****** 2026-02-03 03:34:53.119861 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:53.119866 | orchestrator | 2026-02-03 03:34:53.119870 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:34:53.119875 | orchestrator | Tuesday 03 February 2026 03:34:45 +0000 (0:00:00.689) 0:00:34.409 ****** 2026-02-03 03:34:53.119880 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:53.119885 | orchestrator | 2026-02-03 03:34:53.119889 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:34:53.119894 | orchestrator | Tuesday 03 February 2026 03:34:45 +0000 (0:00:00.251) 0:00:34.660 ****** 2026-02-03 03:34:53.119912 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:53.119917 | orchestrator | 2026-02-03 03:34:53.119922 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:34:53.119927 | orchestrator | Tuesday 03 February 2026 03:34:45 +0000 (0:00:00.222) 0:00:34.883 ****** 2026-02-03 03:34:53.119931 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:53.119936 | orchestrator | 2026-02-03 03:34:53.119941 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:34:53.119945 | orchestrator | Tuesday 03 February 2026 03:34:45 +0000 (0:00:00.215) 0:00:35.099 ****** 2026-02-03 03:34:53.119950 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:53.119955 | orchestrator | 2026-02-03 03:34:53.119959 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:34:53.119964 | orchestrator | Tuesday 03 February 2026 03:34:46 +0000 (0:00:00.231) 0:00:35.330 ****** 2026-02-03 03:34:53.119971 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457) 2026-02-03 03:34:53.119980 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457) 2026-02-03 03:34:53.119987 | orchestrator | 2026-02-03 03:34:53.119995 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:34:53.120002 | orchestrator | Tuesday 03 February 2026 03:34:46 +0000 (0:00:00.447) 0:00:35.778 ****** 2026-02-03 03:34:53.120010 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a2e14d93-a486-403c-9c37-4f6de49ddee5) 2026-02-03 03:34:53.120018 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a2e14d93-a486-403c-9c37-4f6de49ddee5) 2026-02-03 03:34:53.120025 | orchestrator | 2026-02-03 03:34:53.120032 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:34:53.120040 | orchestrator | Tuesday 03 February 2026 03:34:47 +0000 (0:00:00.480) 0:00:36.258 ****** 2026-02-03 03:34:53.120047 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0bcbc917-1e4e-4947-8603-c7f49bd04ea8) 2026-02-03 03:34:53.120055 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0bcbc917-1e4e-4947-8603-c7f49bd04ea8) 2026-02-03 03:34:53.120061 | orchestrator | 2026-02-03 03:34:53.120069 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:34:53.120077 | orchestrator | Tuesday 03 February 2026 03:34:47 +0000 (0:00:00.467) 0:00:36.726 ****** 2026-02-03 03:34:53.120085 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1ed5f26b-b68f-43a9-951f-f4acae255308) 2026-02-03 03:34:53.120133 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1ed5f26b-b68f-43a9-951f-f4acae255308) 2026-02-03 03:34:53.120139 | orchestrator | 2026-02-03 03:34:53.120146 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-02-03 03:34:53.120154 | orchestrator | Tuesday 03 February 2026 03:34:47 +0000 (0:00:00.440) 0:00:37.167 ****** 2026-02-03 03:34:53.120162 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-03 03:34:53.120169 | orchestrator | 2026-02-03 03:34:53.120177 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:34:53.120201 | orchestrator | Tuesday 03 February 2026 03:34:48 +0000 (0:00:00.326) 0:00:37.493 ****** 2026-02-03 03:34:53.120209 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-02-03 03:34:53.120216 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-02-03 03:34:53.120222 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-02-03 03:34:53.120236 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-02-03 03:34:53.120244 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-02-03 03:34:53.120251 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-02-03 03:34:53.120265 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-02-03 03:34:53.120272 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-02-03 03:34:53.120279 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-02-03 03:34:53.120285 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-02-03 03:34:53.120293 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-02-03 03:34:53.120300 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-02-03 03:34:53.120307 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-02-03 03:34:53.120314 | orchestrator | 2026-02-03 03:34:53.120321 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:34:53.120330 | orchestrator | Tuesday 03 February 2026 03:34:48 +0000 (0:00:00.628) 0:00:38.122 ****** 2026-02-03 03:34:53.120337 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:53.120345 | orchestrator | 2026-02-03 03:34:53.120352 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:34:53.120360 | orchestrator | Tuesday 03 February 2026 03:34:49 +0000 (0:00:00.227) 0:00:38.350 ****** 2026-02-03 03:34:53.120367 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:53.120375 | orchestrator | 2026-02-03 03:34:53.120381 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:34:53.120389 | orchestrator | Tuesday 03 February 2026 03:34:49 +0000 (0:00:00.235) 0:00:38.585 ****** 2026-02-03 03:34:53.120396 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:53.120403 | orchestrator | 2026-02-03 03:34:53.120411 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:34:53.120418 | orchestrator | Tuesday 03 February 2026 03:34:49 +0000 (0:00:00.212) 0:00:38.798 ****** 2026-02-03 03:34:53.120426 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:53.120434 | orchestrator | 2026-02-03 03:34:53.120442 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:34:53.120449 | orchestrator | Tuesday 03 February 2026 03:34:49 +0000 (0:00:00.222) 0:00:39.021 ****** 2026-02-03 03:34:53.120456 
| orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:53.120463 | orchestrator | 2026-02-03 03:34:53.120470 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:34:53.120477 | orchestrator | Tuesday 03 February 2026 03:34:50 +0000 (0:00:00.215) 0:00:39.237 ****** 2026-02-03 03:34:53.120484 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:53.120492 | orchestrator | 2026-02-03 03:34:53.120498 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:34:53.120505 | orchestrator | Tuesday 03 February 2026 03:34:50 +0000 (0:00:00.224) 0:00:39.461 ****** 2026-02-03 03:34:53.120511 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:53.120518 | orchestrator | 2026-02-03 03:34:53.120525 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:34:53.120532 | orchestrator | Tuesday 03 February 2026 03:34:50 +0000 (0:00:00.252) 0:00:39.713 ****** 2026-02-03 03:34:53.120539 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:53.120546 | orchestrator | 2026-02-03 03:34:53.120553 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:34:53.120560 | orchestrator | Tuesday 03 February 2026 03:34:50 +0000 (0:00:00.212) 0:00:39.926 ****** 2026-02-03 03:34:53.120568 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-02-03 03:34:53.120575 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-02-03 03:34:53.120584 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-02-03 03:34:53.120591 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-02-03 03:34:53.120598 | orchestrator | 2026-02-03 03:34:53.120614 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:34:53.120622 | orchestrator | Tuesday 03 February 2026 03:34:51 +0000 (0:00:00.934) 
0:00:40.860 ****** 2026-02-03 03:34:53.120630 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:53.120636 | orchestrator | 2026-02-03 03:34:53.120643 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:34:53.120650 | orchestrator | Tuesday 03 February 2026 03:34:51 +0000 (0:00:00.216) 0:00:41.077 ****** 2026-02-03 03:34:53.120658 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:53.120666 | orchestrator | 2026-02-03 03:34:53.120672 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:34:53.120679 | orchestrator | Tuesday 03 February 2026 03:34:52 +0000 (0:00:00.237) 0:00:41.314 ****** 2026-02-03 03:34:53.120685 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:53.120693 | orchestrator | 2026-02-03 03:34:53.120700 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:34:53.120707 | orchestrator | Tuesday 03 February 2026 03:34:52 +0000 (0:00:00.760) 0:00:42.075 ****** 2026-02-03 03:34:53.120715 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:53.120722 | orchestrator | 2026-02-03 03:34:53.120741 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-03 03:34:57.581221 | orchestrator | Tuesday 03 February 2026 03:34:53 +0000 (0:00:00.224) 0:00:42.299 ****** 2026-02-03 03:34:57.581330 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-02-03 03:34:57.581346 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-02-03 03:34:57.581357 | orchestrator | 2026-02-03 03:34:57.581369 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-03 03:34:57.581398 | orchestrator | Tuesday 03 February 2026 03:34:53 +0000 (0:00:00.202) 0:00:42.502 ****** 2026-02-03 03:34:57.581409 | orchestrator | skipping: 
[testbed-node-5] 2026-02-03 03:34:57.581419 | orchestrator | 2026-02-03 03:34:57.581429 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-03 03:34:57.581439 | orchestrator | Tuesday 03 February 2026 03:34:53 +0000 (0:00:00.174) 0:00:42.676 ****** 2026-02-03 03:34:57.581449 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:57.581459 | orchestrator | 2026-02-03 03:34:57.581469 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-03 03:34:57.581479 | orchestrator | Tuesday 03 February 2026 03:34:53 +0000 (0:00:00.150) 0:00:42.827 ****** 2026-02-03 03:34:57.581489 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:57.581498 | orchestrator | 2026-02-03 03:34:57.581508 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-03 03:34:57.581518 | orchestrator | Tuesday 03 February 2026 03:34:53 +0000 (0:00:00.130) 0:00:42.957 ****** 2026-02-03 03:34:57.581528 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:34:57.581538 | orchestrator | 2026-02-03 03:34:57.581548 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-03 03:34:57.581558 | orchestrator | Tuesday 03 February 2026 03:34:53 +0000 (0:00:00.155) 0:00:43.113 ****** 2026-02-03 03:34:57.581568 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9cbb71d1-90c1-5063-b304-f845b9e79bfb'}}) 2026-02-03 03:34:57.581579 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '77c51d77-cdc1-5563-af81-33d9bc4e9bd8'}}) 2026-02-03 03:34:57.581588 | orchestrator | 2026-02-03 03:34:57.581598 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-02-03 03:34:57.581608 | orchestrator | Tuesday 03 February 2026 03:34:54 +0000 (0:00:00.194) 0:00:43.307 ****** 2026-02-03 03:34:57.581618 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9cbb71d1-90c1-5063-b304-f845b9e79bfb'}})  2026-02-03 03:34:57.581630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '77c51d77-cdc1-5563-af81-33d9bc4e9bd8'}})  2026-02-03 03:34:57.581640 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:57.581669 | orchestrator | 2026-02-03 03:34:57.581680 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-03 03:34:57.581689 | orchestrator | Tuesday 03 February 2026 03:34:54 +0000 (0:00:00.162) 0:00:43.470 ****** 2026-02-03 03:34:57.581699 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9cbb71d1-90c1-5063-b304-f845b9e79bfb'}})  2026-02-03 03:34:57.581708 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '77c51d77-cdc1-5563-af81-33d9bc4e9bd8'}})  2026-02-03 03:34:57.581718 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:57.581730 | orchestrator | 2026-02-03 03:34:57.581742 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-03 03:34:57.581754 | orchestrator | Tuesday 03 February 2026 03:34:54 +0000 (0:00:00.185) 0:00:43.655 ****** 2026-02-03 03:34:57.581765 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9cbb71d1-90c1-5063-b304-f845b9e79bfb'}})  2026-02-03 03:34:57.581777 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '77c51d77-cdc1-5563-af81-33d9bc4e9bd8'}})  2026-02-03 03:34:57.581789 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:57.581800 | orchestrator | 2026-02-03 03:34:57.581811 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-03 03:34:57.581823 | orchestrator | Tuesday 03 February 2026 03:34:54 +0000 
(0:00:00.193) 0:00:43.849 ****** 2026-02-03 03:34:57.581834 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:34:57.581846 | orchestrator | 2026-02-03 03:34:57.581856 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-03 03:34:57.581868 | orchestrator | Tuesday 03 February 2026 03:34:54 +0000 (0:00:00.151) 0:00:44.001 ****** 2026-02-03 03:34:57.581879 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:34:57.581890 | orchestrator | 2026-02-03 03:34:57.581902 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-03 03:34:57.581911 | orchestrator | Tuesday 03 February 2026 03:34:55 +0000 (0:00:00.395) 0:00:44.396 ****** 2026-02-03 03:34:57.581921 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:57.581930 | orchestrator | 2026-02-03 03:34:57.581940 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-03 03:34:57.581950 | orchestrator | Tuesday 03 February 2026 03:34:55 +0000 (0:00:00.181) 0:00:44.577 ****** 2026-02-03 03:34:57.581960 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:57.581969 | orchestrator | 2026-02-03 03:34:57.581979 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-03 03:34:57.581988 | orchestrator | Tuesday 03 February 2026 03:34:55 +0000 (0:00:00.149) 0:00:44.726 ****** 2026-02-03 03:34:57.581998 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:57.582008 | orchestrator | 2026-02-03 03:34:57.582072 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-03 03:34:57.582104 | orchestrator | Tuesday 03 February 2026 03:34:55 +0000 (0:00:00.129) 0:00:44.856 ****** 2026-02-03 03:34:57.582115 | orchestrator | ok: [testbed-node-5] => { 2026-02-03 03:34:57.582124 | orchestrator |  "ceph_osd_devices": { 2026-02-03 03:34:57.582134 | orchestrator |  "sdb": { 
2026-02-03 03:34:57.582162 | orchestrator |  "osd_lvm_uuid": "9cbb71d1-90c1-5063-b304-f845b9e79bfb" 2026-02-03 03:34:57.582173 | orchestrator |  }, 2026-02-03 03:34:57.582183 | orchestrator |  "sdc": { 2026-02-03 03:34:57.582193 | orchestrator |  "osd_lvm_uuid": "77c51d77-cdc1-5563-af81-33d9bc4e9bd8" 2026-02-03 03:34:57.582203 | orchestrator |  } 2026-02-03 03:34:57.582213 | orchestrator |  } 2026-02-03 03:34:57.582223 | orchestrator | } 2026-02-03 03:34:57.582233 | orchestrator | 2026-02-03 03:34:57.582248 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-03 03:34:57.582258 | orchestrator | Tuesday 03 February 2026 03:34:55 +0000 (0:00:00.146) 0:00:45.003 ****** 2026-02-03 03:34:57.582268 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:57.582285 | orchestrator | 2026-02-03 03:34:57.582295 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-03 03:34:57.582305 | orchestrator | Tuesday 03 February 2026 03:34:55 +0000 (0:00:00.147) 0:00:45.150 ****** 2026-02-03 03:34:57.582314 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:57.582324 | orchestrator | 2026-02-03 03:34:57.582334 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-03 03:34:57.582343 | orchestrator | Tuesday 03 February 2026 03:34:56 +0000 (0:00:00.148) 0:00:45.299 ****** 2026-02-03 03:34:57.582353 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:34:57.582363 | orchestrator | 2026-02-03 03:34:57.582372 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-03 03:34:57.582382 | orchestrator | Tuesday 03 February 2026 03:34:56 +0000 (0:00:00.138) 0:00:45.437 ****** 2026-02-03 03:34:57.582392 | orchestrator | changed: [testbed-node-5] => { 2026-02-03 03:34:57.582402 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-03 03:34:57.582412 | orchestrator | 
 "ceph_osd_devices": { 2026-02-03 03:34:57.582421 | orchestrator |  "sdb": { 2026-02-03 03:34:57.582431 | orchestrator |  "osd_lvm_uuid": "9cbb71d1-90c1-5063-b304-f845b9e79bfb" 2026-02-03 03:34:57.582441 | orchestrator |  }, 2026-02-03 03:34:57.582451 | orchestrator |  "sdc": { 2026-02-03 03:34:57.582460 | orchestrator |  "osd_lvm_uuid": "77c51d77-cdc1-5563-af81-33d9bc4e9bd8" 2026-02-03 03:34:57.582470 | orchestrator |  } 2026-02-03 03:34:57.582480 | orchestrator |  }, 2026-02-03 03:34:57.582490 | orchestrator |  "lvm_volumes": [ 2026-02-03 03:34:57.582499 | orchestrator |  { 2026-02-03 03:34:57.582509 | orchestrator |  "data": "osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb", 2026-02-03 03:34:57.582519 | orchestrator |  "data_vg": "ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb" 2026-02-03 03:34:57.582529 | orchestrator |  }, 2026-02-03 03:34:57.582539 | orchestrator |  { 2026-02-03 03:34:57.582548 | orchestrator |  "data": "osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8", 2026-02-03 03:34:57.582558 | orchestrator |  "data_vg": "ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8" 2026-02-03 03:34:57.582568 | orchestrator |  } 2026-02-03 03:34:57.582578 | orchestrator |  ] 2026-02-03 03:34:57.582588 | orchestrator |  } 2026-02-03 03:34:57.582597 | orchestrator | } 2026-02-03 03:34:57.582607 | orchestrator | 2026-02-03 03:34:57.582617 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-03 03:34:57.582627 | orchestrator | Tuesday 03 February 2026 03:34:56 +0000 (0:00:00.225) 0:00:45.663 ****** 2026-02-03 03:34:57.582636 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-03 03:34:57.582646 | orchestrator | 2026-02-03 03:34:57.582656 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 03:34:57.582666 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-03 03:34:57.582677 | 
orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-03 03:34:57.582687 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-03 03:34:57.582697 | orchestrator | 2026-02-03 03:34:57.582707 | orchestrator | 2026-02-03 03:34:57.582716 | orchestrator | 2026-02-03 03:34:57.582726 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 03:34:57.582736 | orchestrator | Tuesday 03 February 2026 03:34:57 +0000 (0:00:01.093) 0:00:46.756 ****** 2026-02-03 03:34:57.582745 | orchestrator | =============================================================================== 2026-02-03 03:34:57.582755 | orchestrator | Write configuration file ------------------------------------------------ 4.24s 2026-02-03 03:34:57.582771 | orchestrator | Add known partitions to the list of available block devices ------------- 2.00s 2026-02-03 03:34:57.582780 | orchestrator | Add known links to the list of available block devices ------------------ 1.31s 2026-02-03 03:34:57.582790 | orchestrator | Add known partitions to the list of available block devices ------------- 1.13s 2026-02-03 03:34:57.582799 | orchestrator | Print configuration data ------------------------------------------------ 1.13s 2026-02-03 03:34:57.582809 | orchestrator | Add known partitions to the list of available block devices ------------- 0.97s 2026-02-03 03:34:57.582819 | orchestrator | Add known links to the list of available block devices ------------------ 0.96s 2026-02-03 03:34:57.582828 | orchestrator | Add known partitions to the list of available block devices ------------- 0.93s 2026-02-03 03:34:57.582838 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.89s 2026-02-03 03:34:57.582848 | orchestrator | Get initial list of available block devices ----------------------------- 0.77s 2026-02-03 
03:34:57.582857 | orchestrator | Add known partitions to the list of available block devices ------------- 0.76s 2026-02-03 03:34:57.582867 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.74s 2026-02-03 03:34:57.582877 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.74s 2026-02-03 03:34:57.582892 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s 2026-02-03 03:34:58.055706 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s 2026-02-03 03:34:58.055788 | orchestrator | Set OSD devices config data --------------------------------------------- 0.71s 2026-02-03 03:34:58.055797 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s 2026-02-03 03:34:58.055819 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s 2026-02-03 03:34:58.055826 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2026-02-03 03:34:58.055833 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s 2026-02-03 03:35:20.734475 | orchestrator | 2026-02-03 03:35:20 | INFO  | Task 96652b15-9733-4191-a455-bc9542831bd8 (sync inventory) is running in background. Output coming soon. 
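The "Print configuration data" task in the play above shows, for testbed-node-5, how each entry in `ceph_osd_devices` is expanded into one `lvm_volumes` item. A minimal Python sketch of that mapping, assuming only what the printed output shows (the real playbook builds this via the `Generate lvm_volumes structure (block only)` / `Compile lvm_volumes` set-fact tasks, and the function name here is illustrative):

```python
def build_lvm_volumes(ceph_osd_devices):
    """Expand each OSD device entry into the block-only lvm_volumes item
    seen in the play output: data LV 'osd-block-<uuid>' in VG 'ceph-<uuid>'."""
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for spec in ceph_osd_devices.values()
    ]


# Input exactly as printed for testbed-node-5 above:
devices = {
    "sdb": {"osd_lvm_uuid": "9cbb71d1-90c1-5063-b304-f845b9e79bfb"},
    "sdc": {"osd_lvm_uuid": "77c51d77-cdc1-5563-af81-33d9bc4e9bd8"},
}
for volume in build_lvm_volumes(devices):
    print(volume)
```

Running this reproduces the two `lvm_volumes` entries printed under `_ceph_configure_lvm_config_data` in the log.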
2026-02-03 03:35:51.336997 | orchestrator | 2026-02-03 03:35:22 | INFO  | Starting group_vars file reorganization 2026-02-03 03:35:51.337152 | orchestrator | 2026-02-03 03:35:22 | INFO  | Moved 0 file(s) to their respective directories 2026-02-03 03:35:51.337168 | orchestrator | 2026-02-03 03:35:22 | INFO  | Group_vars file reorganization completed 2026-02-03 03:35:51.337179 | orchestrator | 2026-02-03 03:35:25 | INFO  | Starting variable preparation from inventory 2026-02-03 03:35:51.337189 | orchestrator | 2026-02-03 03:35:28 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-02-03 03:35:51.337199 | orchestrator | 2026-02-03 03:35:28 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-02-03 03:35:51.337209 | orchestrator | 2026-02-03 03:35:28 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-02-03 03:35:51.337219 | orchestrator | 2026-02-03 03:35:28 | INFO  | 3 file(s) written, 6 host(s) processed 2026-02-03 03:35:51.337229 | orchestrator | 2026-02-03 03:35:28 | INFO  | Variable preparation completed 2026-02-03 03:35:51.337239 | orchestrator | 2026-02-03 03:35:30 | INFO  | Starting inventory overwrite handling 2026-02-03 03:35:51.337249 | orchestrator | 2026-02-03 03:35:30 | INFO  | Handling group overwrites in 99-overwrite 2026-02-03 03:35:51.337259 | orchestrator | 2026-02-03 03:35:30 | INFO  | Removing group frr:children from 60-generic 2026-02-03 03:35:51.337268 | orchestrator | 2026-02-03 03:35:30 | INFO  | Removing group netbird:children from 50-infrastructure 2026-02-03 03:35:51.337278 | orchestrator | 2026-02-03 03:35:30 | INFO  | Removing group ceph-rgw from 50-ceph 2026-02-03 03:35:51.337315 | orchestrator | 2026-02-03 03:35:30 | INFO  | Removing group ceph-mds from 50-ceph 2026-02-03 03:35:51.337326 | orchestrator | 2026-02-03 03:35:30 | INFO  | Handling group overwrites in 20-roles 2026-02-03 03:35:51.337336 | orchestrator | 2026-02-03 03:35:30 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-02-03 03:35:51.337346 | orchestrator | 2026-02-03 03:35:30 | INFO  | Removed 5 group(s) in total 2026-02-03 03:35:51.337355 | orchestrator | 2026-02-03 03:35:30 | INFO  | Inventory overwrite handling completed 2026-02-03 03:35:51.337365 | orchestrator | 2026-02-03 03:35:31 | INFO  | Starting merge of inventory files 2026-02-03 03:35:51.337375 | orchestrator | 2026-02-03 03:35:31 | INFO  | Inventory files merged successfully 2026-02-03 03:35:51.337385 | orchestrator | 2026-02-03 03:35:37 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-02-03 03:35:51.337395 | orchestrator | 2026-02-03 03:35:49 | INFO  | Successfully wrote ClusterShell configuration 2026-02-03 03:35:51.337405 | orchestrator | [master 84b14e9] 2026-02-03-03-35 2026-02-03 03:35:51.337416 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2026-02-03 03:35:53.847384 | orchestrator | 2026-02-03 03:35:53 | INFO  | Task 0df9c829-9b5b-4d9f-8002-1e750b7ade3e (ceph-create-lvm-devices) was prepared for execution. 2026-02-03 03:35:53.847485 | orchestrator | 2026-02-03 03:35:53 | INFO  | It takes a moment until task 0df9c829-9b5b-4d9f-8002-1e750b7ade3e (ceph-create-lvm-devices) has been started and output is visible here. 
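The `ceph-create-lvm-devices` task queued above creates, for every OSD in `ceph_osd_devices`, a volume group `ceph-<uuid>` holding a single `osd-block-<uuid>` logical volume, as the "Create block VGs" and "Create block LVs" tasks later in the log confirm. A hedged sketch of the equivalent raw LVM commands (the play itself uses Ansible tasks, not shell; the device-path mapping and `lvcreate` flags here are illustrative assumptions):

```python
def lvm_create_commands(ceph_osd_devices, device_paths):
    """Emit, per OSD, the vgcreate/lvcreate pair matching the naming scheme
    recorded in the 'Create block VGs'/'Create block LVs' task output.
    device_paths maps a device key (e.g. 'sdb') to its node path."""
    commands = []
    for name, spec in ceph_osd_devices.items():
        uuid = spec["osd_lvm_uuid"]
        dev = device_paths[name]  # assumed mapping, e.g. "sdb" -> "/dev/sdb"
        commands.append(f"vgcreate ceph-{uuid} {dev}")
        # One LV spanning the whole VG; flags are an illustrative choice.
        commands.append(f"lvcreate -l 100%FREE -n osd-block-{uuid} ceph-{uuid}")
    return commands


# UUIDs taken from the testbed-node-3 task output below:
cmds = lvm_create_commands(
    {"sdb": {"osd_lvm_uuid": "85b6ff9c-bd3f-596f-9d81-0006b9d69e29"}},
    {"sdb": "/dev/sdb"},
)
for cmd in cmds:
    print(cmd)
```

This only derives and prints the command strings; it does not touch any block device.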
2026-02-03 03:36:06.479825 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-03 03:36:06.479955 | orchestrator | 2.16.14 2026-02-03 03:36:06.479977 | orchestrator | 2026-02-03 03:36:06.480063 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-03 03:36:06.480081 | orchestrator | 2026-02-03 03:36:06.480095 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-03 03:36:06.480109 | orchestrator | Tuesday 03 February 2026 03:35:58 +0000 (0:00:00.337) 0:00:00.337 ****** 2026-02-03 03:36:06.480122 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-03 03:36:06.480135 | orchestrator | 2026-02-03 03:36:06.480149 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-03 03:36:06.480163 | orchestrator | Tuesday 03 February 2026 03:35:58 +0000 (0:00:00.256) 0:00:00.594 ****** 2026-02-03 03:36:06.480177 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:36:06.480266 | orchestrator | 2026-02-03 03:36:06.480285 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:36:06.480298 | orchestrator | Tuesday 03 February 2026 03:35:58 +0000 (0:00:00.249) 0:00:00.844 ****** 2026-02-03 03:36:06.480312 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-02-03 03:36:06.480325 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-02-03 03:36:06.480358 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-02-03 03:36:06.480371 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-02-03 03:36:06.480384 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-02-03 
03:36:06.480398 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-02-03 03:36:06.480411 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-02-03 03:36:06.480424 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-02-03 03:36:06.480437 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-02-03 03:36:06.480451 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-02-03 03:36:06.480490 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-02-03 03:36:06.480504 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-02-03 03:36:06.480517 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-02-03 03:36:06.480531 | orchestrator | 2026-02-03 03:36:06.480545 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:36:06.480557 | orchestrator | Tuesday 03 February 2026 03:35:59 +0000 (0:00:00.538) 0:00:01.382 ****** 2026-02-03 03:36:06.480572 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:06.480585 | orchestrator | 2026-02-03 03:36:06.480599 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:36:06.480613 | orchestrator | Tuesday 03 February 2026 03:35:59 +0000 (0:00:00.216) 0:00:01.599 ****** 2026-02-03 03:36:06.480626 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:06.480694 | orchestrator | 2026-02-03 03:36:06.480708 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:36:06.480722 | orchestrator | Tuesday 03 February 2026 03:35:59 +0000 (0:00:00.228) 0:00:01.827 ****** 2026-02-03 
03:36:06.480736 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:06.480748 | orchestrator | 2026-02-03 03:36:06.480760 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:36:06.480774 | orchestrator | Tuesday 03 February 2026 03:36:00 +0000 (0:00:00.224) 0:00:02.051 ****** 2026-02-03 03:36:06.480787 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:06.480800 | orchestrator | 2026-02-03 03:36:06.480814 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:36:06.480826 | orchestrator | Tuesday 03 February 2026 03:36:00 +0000 (0:00:00.214) 0:00:02.266 ****** 2026-02-03 03:36:06.480840 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:06.480853 | orchestrator | 2026-02-03 03:36:06.480867 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:36:06.480880 | orchestrator | Tuesday 03 February 2026 03:36:00 +0000 (0:00:00.223) 0:00:02.490 ****** 2026-02-03 03:36:06.480893 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:06.480907 | orchestrator | 2026-02-03 03:36:06.480920 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:36:06.480934 | orchestrator | Tuesday 03 February 2026 03:36:00 +0000 (0:00:00.199) 0:00:02.690 ****** 2026-02-03 03:36:06.480948 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:06.480960 | orchestrator | 2026-02-03 03:36:06.480972 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:36:06.480985 | orchestrator | Tuesday 03 February 2026 03:36:00 +0000 (0:00:00.226) 0:00:02.916 ****** 2026-02-03 03:36:06.481020 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:06.481033 | orchestrator | 2026-02-03 03:36:06.481046 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-02-03 03:36:06.481058 | orchestrator | Tuesday 03 February 2026 03:36:01 +0000 (0:00:00.228) 0:00:03.145 ****** 2026-02-03 03:36:06.481071 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371) 2026-02-03 03:36:06.481087 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371) 2026-02-03 03:36:06.481100 | orchestrator | 2026-02-03 03:36:06.481113 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:36:06.481153 | orchestrator | Tuesday 03 February 2026 03:36:01 +0000 (0:00:00.456) 0:00:03.601 ****** 2026-02-03 03:36:06.481163 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b4cf4752-8315-482c-8a5b-0aee9859091f) 2026-02-03 03:36:06.481171 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b4cf4752-8315-482c-8a5b-0aee9859091f) 2026-02-03 03:36:06.481179 | orchestrator | 2026-02-03 03:36:06.481188 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:36:06.481208 | orchestrator | Tuesday 03 February 2026 03:36:02 +0000 (0:00:00.676) 0:00:04.278 ****** 2026-02-03 03:36:06.481216 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8097be92-44ca-4be8-a1da-39ba5887696e) 2026-02-03 03:36:06.481224 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8097be92-44ca-4be8-a1da-39ba5887696e) 2026-02-03 03:36:06.481232 | orchestrator | 2026-02-03 03:36:06.481240 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:36:06.481248 | orchestrator | Tuesday 03 February 2026 03:36:03 +0000 (0:00:00.683) 0:00:04.961 ****** 2026-02-03 03:36:06.481256 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_30942d1f-f704-43cc-bdd1-e5a5821a35c3) 2026-02-03 03:36:06.481273 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_30942d1f-f704-43cc-bdd1-e5a5821a35c3) 2026-02-03 03:36:06.481281 | orchestrator | 2026-02-03 03:36:06.481289 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:36:06.481298 | orchestrator | Tuesday 03 February 2026 03:36:04 +0000 (0:00:00.970) 0:00:05.932 ****** 2026-02-03 03:36:06.481306 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-03 03:36:06.481314 | orchestrator | 2026-02-03 03:36:06.481322 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:36:06.481330 | orchestrator | Tuesday 03 February 2026 03:36:04 +0000 (0:00:00.400) 0:00:06.332 ****** 2026-02-03 03:36:06.481338 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-02-03 03:36:06.481346 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-02-03 03:36:06.481354 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-02-03 03:36:06.481362 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-02-03 03:36:06.481370 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-02-03 03:36:06.481378 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-02-03 03:36:06.481386 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-02-03 03:36:06.481393 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-02-03 03:36:06.481401 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-02-03 03:36:06.481409 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-02-03 03:36:06.481417 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-02-03 03:36:06.481425 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-02-03 03:36:06.481432 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-02-03 03:36:06.481440 | orchestrator | 2026-02-03 03:36:06.481449 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:36:06.481456 | orchestrator | Tuesday 03 February 2026 03:36:04 +0000 (0:00:00.461) 0:00:06.794 ****** 2026-02-03 03:36:06.481464 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:06.481472 | orchestrator | 2026-02-03 03:36:06.481480 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:36:06.481488 | orchestrator | Tuesday 03 February 2026 03:36:05 +0000 (0:00:00.226) 0:00:07.021 ****** 2026-02-03 03:36:06.481496 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:06.481503 | orchestrator | 2026-02-03 03:36:06.481512 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:36:06.481519 | orchestrator | Tuesday 03 February 2026 03:36:05 +0000 (0:00:00.224) 0:00:07.246 ****** 2026-02-03 03:36:06.481527 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:06.481540 | orchestrator | 2026-02-03 03:36:06.481548 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:36:06.481557 | orchestrator | Tuesday 03 February 2026 03:36:05 +0000 (0:00:00.209) 0:00:07.455 ****** 2026-02-03 03:36:06.481564 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:06.481572 | orchestrator | 2026-02-03 03:36:06.481580 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-02-03 03:36:06.481588 | orchestrator | Tuesday 03 February 2026 03:36:05 +0000 (0:00:00.214) 0:00:07.669 ****** 2026-02-03 03:36:06.481596 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:06.481604 | orchestrator | 2026-02-03 03:36:06.481612 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:36:06.481620 | orchestrator | Tuesday 03 February 2026 03:36:05 +0000 (0:00:00.244) 0:00:07.914 ****** 2026-02-03 03:36:06.481628 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:06.481636 | orchestrator | 2026-02-03 03:36:06.481643 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:36:06.481651 | orchestrator | Tuesday 03 February 2026 03:36:06 +0000 (0:00:00.245) 0:00:08.159 ****** 2026-02-03 03:36:06.481659 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:06.481667 | orchestrator | 2026-02-03 03:36:06.481680 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:36:15.133474 | orchestrator | Tuesday 03 February 2026 03:36:06 +0000 (0:00:00.224) 0:00:08.384 ****** 2026-02-03 03:36:15.133574 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:15.133584 | orchestrator | 2026-02-03 03:36:15.133592 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:36:15.133599 | orchestrator | Tuesday 03 February 2026 03:36:07 +0000 (0:00:00.721) 0:00:09.105 ****** 2026-02-03 03:36:15.133606 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-02-03 03:36:15.133613 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-02-03 03:36:15.133619 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-02-03 03:36:15.133625 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-02-03 03:36:15.133631 | orchestrator | 2026-02-03 
03:36:15.133637 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:36:15.133644 | orchestrator | Tuesday 03 February 2026 03:36:07 +0000 (0:00:00.728) 0:00:09.834 ****** 2026-02-03 03:36:15.133650 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:15.133655 | orchestrator | 2026-02-03 03:36:15.133662 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:36:15.133668 | orchestrator | Tuesday 03 February 2026 03:36:08 +0000 (0:00:00.222) 0:00:10.056 ****** 2026-02-03 03:36:15.133674 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:15.133680 | orchestrator | 2026-02-03 03:36:15.133703 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:36:15.133710 | orchestrator | Tuesday 03 February 2026 03:36:08 +0000 (0:00:00.230) 0:00:10.287 ****** 2026-02-03 03:36:15.133716 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:15.133722 | orchestrator | 2026-02-03 03:36:15.133728 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:36:15.133734 | orchestrator | Tuesday 03 February 2026 03:36:08 +0000 (0:00:00.216) 0:00:10.503 ****** 2026-02-03 03:36:15.133741 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:15.133747 | orchestrator | 2026-02-03 03:36:15.133754 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-03 03:36:15.133760 | orchestrator | Tuesday 03 February 2026 03:36:08 +0000 (0:00:00.221) 0:00:10.724 ****** 2026-02-03 03:36:15.133766 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:15.133772 | orchestrator | 2026-02-03 03:36:15.133779 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-03 03:36:15.133785 | orchestrator | Tuesday 03 February 2026 03:36:08 +0000 (0:00:00.148) 
0:00:10.873 ****** 2026-02-03 03:36:15.133792 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '85b6ff9c-bd3f-596f-9d81-0006b9d69e29'}}) 2026-02-03 03:36:15.133818 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bafb60f3-a5a9-526b-adce-8ea58a9a19cd'}}) 2026-02-03 03:36:15.133824 | orchestrator | 2026-02-03 03:36:15.133830 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-03 03:36:15.133837 | orchestrator | Tuesday 03 February 2026 03:36:09 +0000 (0:00:00.197) 0:00:11.071 ****** 2026-02-03 03:36:15.133844 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'}) 2026-02-03 03:36:15.133852 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'}) 2026-02-03 03:36:15.133859 | orchestrator | 2026-02-03 03:36:15.133865 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-03 03:36:15.133871 | orchestrator | Tuesday 03 February 2026 03:36:11 +0000 (0:00:02.081) 0:00:13.152 ****** 2026-02-03 03:36:15.133876 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'})  2026-02-03 03:36:15.133884 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'})  2026-02-03 03:36:15.133890 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:15.133897 | orchestrator | 2026-02-03 03:36:15.133903 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-03 03:36:15.133909 | orchestrator | Tuesday 03 February 2026 
03:36:11 +0000 (0:00:00.165) 0:00:13.318 ****** 2026-02-03 03:36:15.133916 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'}) 2026-02-03 03:36:15.133922 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'}) 2026-02-03 03:36:15.133929 | orchestrator | 2026-02-03 03:36:15.133935 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-03 03:36:15.133941 | orchestrator | Tuesday 03 February 2026 03:36:12 +0000 (0:00:01.500) 0:00:14.818 ****** 2026-02-03 03:36:15.133948 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'})  2026-02-03 03:36:15.133954 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'})  2026-02-03 03:36:15.133961 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:15.133967 | orchestrator | 2026-02-03 03:36:15.133973 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-03 03:36:15.134098 | orchestrator | Tuesday 03 February 2026 03:36:13 +0000 (0:00:00.165) 0:00:14.984 ****** 2026-02-03 03:36:15.134131 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:15.134138 | orchestrator | 2026-02-03 03:36:15.134145 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-03 03:36:15.134152 | orchestrator | Tuesday 03 February 2026 03:36:13 +0000 (0:00:00.398) 0:00:15.383 ****** 2026-02-03 03:36:15.134160 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 
'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'})  2026-02-03 03:36:15.134167 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'})  2026-02-03 03:36:15.134174 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:15.134181 | orchestrator | 2026-02-03 03:36:15.134188 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-03 03:36:15.134195 | orchestrator | Tuesday 03 February 2026 03:36:13 +0000 (0:00:00.167) 0:00:15.550 ****** 2026-02-03 03:36:15.134209 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:15.134217 | orchestrator | 2026-02-03 03:36:15.134224 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-03 03:36:15.134231 | orchestrator | Tuesday 03 February 2026 03:36:13 +0000 (0:00:00.151) 0:00:15.702 ****** 2026-02-03 03:36:15.134243 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'})  2026-02-03 03:36:15.134251 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'})  2026-02-03 03:36:15.134258 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:15.134265 | orchestrator | 2026-02-03 03:36:15.134272 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-03 03:36:15.134279 | orchestrator | Tuesday 03 February 2026 03:36:13 +0000 (0:00:00.166) 0:00:15.869 ****** 2026-02-03 03:36:15.134287 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:15.134293 | orchestrator | 2026-02-03 03:36:15.134300 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-03 03:36:15.134307 | orchestrator | 
Tuesday 03 February 2026 03:36:14 +0000 (0:00:00.148) 0:00:16.018 ****** 2026-02-03 03:36:15.134314 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'})  2026-02-03 03:36:15.134322 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'})  2026-02-03 03:36:15.134329 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:15.134336 | orchestrator | 2026-02-03 03:36:15.134342 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-03 03:36:15.134349 | orchestrator | Tuesday 03 February 2026 03:36:14 +0000 (0:00:00.165) 0:00:16.184 ****** 2026-02-03 03:36:15.134356 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:36:15.134363 | orchestrator | 2026-02-03 03:36:15.134369 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-03 03:36:15.134376 | orchestrator | Tuesday 03 February 2026 03:36:14 +0000 (0:00:00.164) 0:00:16.348 ****** 2026-02-03 03:36:15.134382 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'})  2026-02-03 03:36:15.134389 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'})  2026-02-03 03:36:15.134395 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:15.134402 | orchestrator | 2026-02-03 03:36:15.134408 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-02-03 03:36:15.134415 | orchestrator | Tuesday 03 February 2026 03:36:14 +0000 (0:00:00.190) 0:00:16.538 ****** 2026-02-03 03:36:15.134421 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'})  2026-02-03 03:36:15.134428 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'})  2026-02-03 03:36:15.134434 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:15.134440 | orchestrator | 2026-02-03 03:36:15.134447 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-03 03:36:15.134454 | orchestrator | Tuesday 03 February 2026 03:36:14 +0000 (0:00:00.178) 0:00:16.717 ****** 2026-02-03 03:36:15.134460 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'})  2026-02-03 03:36:15.134467 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'})  2026-02-03 03:36:15.134478 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:15.134484 | orchestrator | 2026-02-03 03:36:15.134491 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-03 03:36:15.134497 | orchestrator | Tuesday 03 February 2026 03:36:14 +0000 (0:00:00.165) 0:00:16.883 ****** 2026-02-03 03:36:15.134504 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:15.134510 | orchestrator | 2026-02-03 03:36:15.134517 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-03 03:36:15.134528 | orchestrator | Tuesday 03 February 2026 03:36:15 +0000 (0:00:00.161) 0:00:17.044 ****** 2026-02-03 03:36:22.187425 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:22.187496 | orchestrator | 2026-02-03 03:36:22.187503 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-02-03 03:36:22.187508 | orchestrator | Tuesday 03 February 2026 03:36:15 +0000 (0:00:00.168) 0:00:17.212 ****** 2026-02-03 03:36:22.187513 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:22.187517 | orchestrator | 2026-02-03 03:36:22.187522 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-03 03:36:22.187526 | orchestrator | Tuesday 03 February 2026 03:36:15 +0000 (0:00:00.368) 0:00:17.581 ****** 2026-02-03 03:36:22.187530 | orchestrator | ok: [testbed-node-3] => { 2026-02-03 03:36:22.187535 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-03 03:36:22.187539 | orchestrator | } 2026-02-03 03:36:22.187543 | orchestrator | 2026-02-03 03:36:22.187547 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-03 03:36:22.187551 | orchestrator | Tuesday 03 February 2026 03:36:15 +0000 (0:00:00.165) 0:00:17.747 ****** 2026-02-03 03:36:22.187555 | orchestrator | ok: [testbed-node-3] => { 2026-02-03 03:36:22.187559 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-03 03:36:22.187563 | orchestrator | } 2026-02-03 03:36:22.187566 | orchestrator | 2026-02-03 03:36:22.187571 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-03 03:36:22.187586 | orchestrator | Tuesday 03 February 2026 03:36:15 +0000 (0:00:00.162) 0:00:17.909 ****** 2026-02-03 03:36:22.187591 | orchestrator | ok: [testbed-node-3] => { 2026-02-03 03:36:22.187594 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-03 03:36:22.187599 | orchestrator | } 2026-02-03 03:36:22.187602 | orchestrator | 2026-02-03 03:36:22.187606 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-03 03:36:22.187610 | orchestrator | Tuesday 03 February 2026 03:36:16 +0000 (0:00:00.169) 0:00:18.079 ****** 2026-02-03 03:36:22.187614 | orchestrator | ok: 
[testbed-node-3] 2026-02-03 03:36:22.187618 | orchestrator | 2026-02-03 03:36:22.187622 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-03 03:36:22.187626 | orchestrator | Tuesday 03 February 2026 03:36:16 +0000 (0:00:00.702) 0:00:18.781 ****** 2026-02-03 03:36:22.187629 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:36:22.187633 | orchestrator | 2026-02-03 03:36:22.187637 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-03 03:36:22.187641 | orchestrator | Tuesday 03 February 2026 03:36:17 +0000 (0:00:00.544) 0:00:19.326 ****** 2026-02-03 03:36:22.187645 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:36:22.187649 | orchestrator | 2026-02-03 03:36:22.187652 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-02-03 03:36:22.187656 | orchestrator | Tuesday 03 February 2026 03:36:17 +0000 (0:00:00.529) 0:00:19.855 ****** 2026-02-03 03:36:22.187660 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:36:22.187664 | orchestrator | 2026-02-03 03:36:22.187668 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-03 03:36:22.187672 | orchestrator | Tuesday 03 February 2026 03:36:18 +0000 (0:00:00.161) 0:00:20.017 ****** 2026-02-03 03:36:22.187675 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:22.187679 | orchestrator | 2026-02-03 03:36:22.187683 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-03 03:36:22.187702 | orchestrator | Tuesday 03 February 2026 03:36:18 +0000 (0:00:00.113) 0:00:20.131 ****** 2026-02-03 03:36:22.187706 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:22.187710 | orchestrator | 2026-02-03 03:36:22.187714 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-03 03:36:22.187718 | orchestrator | 
Tuesday 03 February 2026 03:36:18 +0000 (0:00:00.133) 0:00:20.265 ****** 2026-02-03 03:36:22.187722 | orchestrator | ok: [testbed-node-3] => { 2026-02-03 03:36:22.187726 | orchestrator |  "vgs_report": { 2026-02-03 03:36:22.187730 | orchestrator |  "vg": [] 2026-02-03 03:36:22.187734 | orchestrator |  } 2026-02-03 03:36:22.187738 | orchestrator | } 2026-02-03 03:36:22.187742 | orchestrator | 2026-02-03 03:36:22.187746 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-03 03:36:22.187750 | orchestrator | Tuesday 03 February 2026 03:36:18 +0000 (0:00:00.157) 0:00:20.422 ****** 2026-02-03 03:36:22.187753 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:22.187757 | orchestrator | 2026-02-03 03:36:22.187761 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-02-03 03:36:22.187765 | orchestrator | Tuesday 03 February 2026 03:36:18 +0000 (0:00:00.153) 0:00:20.576 ****** 2026-02-03 03:36:22.187769 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:22.187772 | orchestrator | 2026-02-03 03:36:22.187776 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-03 03:36:22.187780 | orchestrator | Tuesday 03 February 2026 03:36:19 +0000 (0:00:00.391) 0:00:20.967 ****** 2026-02-03 03:36:22.187784 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:22.187787 | orchestrator | 2026-02-03 03:36:22.187791 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-03 03:36:22.187795 | orchestrator | Tuesday 03 February 2026 03:36:19 +0000 (0:00:00.155) 0:00:21.123 ****** 2026-02-03 03:36:22.187799 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:22.187803 | orchestrator | 2026-02-03 03:36:22.187807 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-03 03:36:22.187810 | orchestrator | Tuesday 
03 February 2026 03:36:19 +0000 (0:00:00.161) 0:00:21.284 ****** 2026-02-03 03:36:22.187814 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:22.187818 | orchestrator | 2026-02-03 03:36:22.187822 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-03 03:36:22.187826 | orchestrator | Tuesday 03 February 2026 03:36:19 +0000 (0:00:00.165) 0:00:21.449 ****** 2026-02-03 03:36:22.187829 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:22.187833 | orchestrator | 2026-02-03 03:36:22.187837 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-03 03:36:22.187841 | orchestrator | Tuesday 03 February 2026 03:36:19 +0000 (0:00:00.148) 0:00:21.598 ****** 2026-02-03 03:36:22.187844 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:22.187848 | orchestrator | 2026-02-03 03:36:22.187852 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-03 03:36:22.187856 | orchestrator | Tuesday 03 February 2026 03:36:19 +0000 (0:00:00.145) 0:00:21.743 ****** 2026-02-03 03:36:22.187868 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:22.187872 | orchestrator | 2026-02-03 03:36:22.187876 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-03 03:36:22.187880 | orchestrator | Tuesday 03 February 2026 03:36:19 +0000 (0:00:00.154) 0:00:21.898 ****** 2026-02-03 03:36:22.187884 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:22.187888 | orchestrator | 2026-02-03 03:36:22.187892 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-03 03:36:22.187895 | orchestrator | Tuesday 03 February 2026 03:36:20 +0000 (0:00:00.145) 0:00:22.043 ****** 2026-02-03 03:36:22.187899 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:22.187903 | orchestrator | 2026-02-03 03:36:22.187907 | 
orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-03 03:36:22.187911 | orchestrator | Tuesday 03 February 2026 03:36:20 +0000 (0:00:00.170) 0:00:22.214 ****** 2026-02-03 03:36:22.187918 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:22.187922 | orchestrator | 2026-02-03 03:36:22.187926 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-03 03:36:22.187930 | orchestrator | Tuesday 03 February 2026 03:36:20 +0000 (0:00:00.159) 0:00:22.373 ****** 2026-02-03 03:36:22.187934 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:22.187937 | orchestrator | 2026-02-03 03:36:22.187944 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-03 03:36:22.187948 | orchestrator | Tuesday 03 February 2026 03:36:20 +0000 (0:00:00.159) 0:00:22.533 ****** 2026-02-03 03:36:22.187952 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:22.187955 | orchestrator | 2026-02-03 03:36:22.187959 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-03 03:36:22.187963 | orchestrator | Tuesday 03 February 2026 03:36:20 +0000 (0:00:00.145) 0:00:22.678 ****** 2026-02-03 03:36:22.188015 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:22.188020 | orchestrator | 2026-02-03 03:36:22.188025 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-03 03:36:22.188029 | orchestrator | Tuesday 03 February 2026 03:36:21 +0000 (0:00:00.380) 0:00:23.059 ****** 2026-02-03 03:36:22.188035 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'})  2026-02-03 03:36:22.188042 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 
'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'})  2026-02-03 03:36:22.188048 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:22.188054 | orchestrator | 2026-02-03 03:36:22.188062 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-03 03:36:22.188068 | orchestrator | Tuesday 03 February 2026 03:36:21 +0000 (0:00:00.167) 0:00:23.227 ****** 2026-02-03 03:36:22.188073 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'})  2026-02-03 03:36:22.188079 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'})  2026-02-03 03:36:22.188084 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:22.188090 | orchestrator | 2026-02-03 03:36:22.188099 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-03 03:36:22.188107 | orchestrator | Tuesday 03 February 2026 03:36:21 +0000 (0:00:00.169) 0:00:23.396 ****** 2026-02-03 03:36:22.188112 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'})  2026-02-03 03:36:22.188119 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'})  2026-02-03 03:36:22.188125 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:22.188130 | orchestrator | 2026-02-03 03:36:22.188136 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-02-03 03:36:22.188142 | orchestrator | Tuesday 03 February 2026 03:36:21 +0000 (0:00:00.194) 0:00:23.591 ****** 2026-02-03 03:36:22.188149 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'})  2026-02-03 03:36:22.188155 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'})  2026-02-03 03:36:22.188161 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:22.188168 | orchestrator | 2026-02-03 03:36:22.188174 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-03 03:36:22.188180 | orchestrator | Tuesday 03 February 2026 03:36:21 +0000 (0:00:00.172) 0:00:23.763 ****** 2026-02-03 03:36:22.188192 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'})  2026-02-03 03:36:22.188198 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'})  2026-02-03 03:36:22.188204 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:22.188211 | orchestrator | 2026-02-03 03:36:22.188217 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-03 03:36:22.188223 | orchestrator | Tuesday 03 February 2026 03:36:22 +0000 (0:00:00.168) 0:00:23.931 ****** 2026-02-03 03:36:22.188235 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'})  2026-02-03 03:36:27.943960 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'})  2026-02-03 03:36:27.944068 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:27.944079 | orchestrator | 2026-02-03 03:36:27.944087 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-02-03 03:36:27.944094 | orchestrator | Tuesday 03 February 2026 03:36:22 +0000 (0:00:00.169) 0:00:24.101 ****** 2026-02-03 03:36:27.944100 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'})  2026-02-03 03:36:27.944107 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'})  2026-02-03 03:36:27.944113 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:27.944118 | orchestrator | 2026-02-03 03:36:27.944138 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-03 03:36:27.944144 | orchestrator | Tuesday 03 February 2026 03:36:22 +0000 (0:00:00.175) 0:00:24.276 ****** 2026-02-03 03:36:27.944150 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'})  2026-02-03 03:36:27.944155 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'})  2026-02-03 03:36:27.944161 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:27.944167 | orchestrator | 2026-02-03 03:36:27.944172 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-03 03:36:27.944178 | orchestrator | Tuesday 03 February 2026 03:36:22 +0000 (0:00:00.167) 0:00:24.443 ****** 2026-02-03 03:36:27.944183 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:36:27.944190 | orchestrator | 2026-02-03 03:36:27.944196 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-03 03:36:27.944201 | orchestrator | Tuesday 03 February 2026 03:36:23 +0000 
(0:00:00.606) 0:00:25.050 ****** 2026-02-03 03:36:27.944207 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:36:27.944212 | orchestrator | 2026-02-03 03:36:27.944218 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-03 03:36:27.944224 | orchestrator | Tuesday 03 February 2026 03:36:23 +0000 (0:00:00.547) 0:00:25.598 ****** 2026-02-03 03:36:27.944229 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:36:27.944235 | orchestrator | 2026-02-03 03:36:27.944240 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-03 03:36:27.944246 | orchestrator | Tuesday 03 February 2026 03:36:23 +0000 (0:00:00.162) 0:00:25.760 ****** 2026-02-03 03:36:27.944252 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'vg_name': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'}) 2026-02-03 03:36:27.944259 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'vg_name': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'}) 2026-02-03 03:36:27.944281 | orchestrator | 2026-02-03 03:36:27.944287 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-03 03:36:27.944292 | orchestrator | Tuesday 03 February 2026 03:36:24 +0000 (0:00:00.189) 0:00:25.949 ****** 2026-02-03 03:36:27.944298 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'})  2026-02-03 03:36:27.944304 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'})  2026-02-03 03:36:27.944309 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:27.944315 | orchestrator | 2026-02-03 03:36:27.944320 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-02-03 03:36:27.944326 | orchestrator | Tuesday 03 February 2026 03:36:24 +0000 (0:00:00.407) 0:00:26.357 ****** 2026-02-03 03:36:27.944331 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'})  2026-02-03 03:36:27.944337 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'})  2026-02-03 03:36:27.944343 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:27.944348 | orchestrator | 2026-02-03 03:36:27.944354 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-03 03:36:27.944359 | orchestrator | Tuesday 03 February 2026 03:36:24 +0000 (0:00:00.177) 0:00:26.534 ****** 2026-02-03 03:36:27.944365 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'})  2026-02-03 03:36:27.944371 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'})  2026-02-03 03:36:27.944376 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:36:27.944381 | orchestrator | 2026-02-03 03:36:27.944387 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-03 03:36:27.944393 | orchestrator | Tuesday 03 February 2026 03:36:24 +0000 (0:00:00.175) 0:00:26.710 ****** 2026-02-03 03:36:27.944411 | orchestrator | ok: [testbed-node-3] => { 2026-02-03 03:36:27.944417 | orchestrator |  "lvm_report": { 2026-02-03 03:36:27.944423 | orchestrator |  "lv": [ 2026-02-03 03:36:27.944429 | orchestrator |  { 2026-02-03 03:36:27.944434 | orchestrator |  "lv_name": 
"osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29", 2026-02-03 03:36:27.944441 | orchestrator |  "vg_name": "ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29" 2026-02-03 03:36:27.944447 | orchestrator |  }, 2026-02-03 03:36:27.944452 | orchestrator |  { 2026-02-03 03:36:27.944458 | orchestrator |  "lv_name": "osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd", 2026-02-03 03:36:27.944463 | orchestrator |  "vg_name": "ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd" 2026-02-03 03:36:27.944469 | orchestrator |  } 2026-02-03 03:36:27.944475 | orchestrator |  ], 2026-02-03 03:36:27.944484 | orchestrator |  "pv": [ 2026-02-03 03:36:27.944492 | orchestrator |  { 2026-02-03 03:36:27.944501 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-03 03:36:27.944511 | orchestrator |  "vg_name": "ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29" 2026-02-03 03:36:27.944520 | orchestrator |  }, 2026-02-03 03:36:27.944530 | orchestrator |  { 2026-02-03 03:36:27.944544 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-03 03:36:27.944554 | orchestrator |  "vg_name": "ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd" 2026-02-03 03:36:27.944563 | orchestrator |  } 2026-02-03 03:36:27.944571 | orchestrator |  ] 2026-02-03 03:36:27.944581 | orchestrator |  } 2026-02-03 03:36:27.944590 | orchestrator | } 2026-02-03 03:36:27.944608 | orchestrator | 2026-02-03 03:36:27.944617 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-03 03:36:27.944626 | orchestrator | 2026-02-03 03:36:27.944635 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-03 03:36:27.944644 | orchestrator | Tuesday 03 February 2026 03:36:25 +0000 (0:00:00.348) 0:00:27.058 ****** 2026-02-03 03:36:27.944654 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-03 03:36:27.944664 | orchestrator | 2026-02-03 03:36:27.944673 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-03 
03:36:27.944683 | orchestrator | Tuesday 03 February 2026 03:36:25 +0000 (0:00:00.277) 0:00:27.336 ****** 2026-02-03 03:36:27.944689 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:36:27.944696 | orchestrator | 2026-02-03 03:36:27.944702 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:36:27.944709 | orchestrator | Tuesday 03 February 2026 03:36:25 +0000 (0:00:00.247) 0:00:27.583 ****** 2026-02-03 03:36:27.944716 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-03 03:36:27.944722 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-03 03:36:27.944729 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-03 03:36:27.944735 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-03 03:36:27.944742 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-03 03:36:27.944748 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-03 03:36:27.944755 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-03 03:36:27.944761 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-03 03:36:27.944768 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-03 03:36:27.944774 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-03 03:36:27.944781 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-03 03:36:27.944787 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-03 03:36:27.944794 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-02-03 03:36:27.944801 | orchestrator |
2026-02-03 03:36:27.944807 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:36:27.944813 | orchestrator | Tuesday 03 February 2026 03:36:26 +0000 (0:00:00.445) 0:00:28.029 ******
2026-02-03 03:36:27.944820 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:27.944826 | orchestrator |
2026-02-03 03:36:27.944833 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:36:27.944840 | orchestrator | Tuesday 03 February 2026 03:36:26 +0000 (0:00:00.227) 0:00:28.256 ******
2026-02-03 03:36:27.944846 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:27.944852 | orchestrator |
2026-02-03 03:36:27.944859 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:36:27.944864 | orchestrator | Tuesday 03 February 2026 03:36:27 +0000 (0:00:00.663) 0:00:28.920 ******
2026-02-03 03:36:27.944870 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:27.944875 | orchestrator |
2026-02-03 03:36:27.944881 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:36:27.944886 | orchestrator | Tuesday 03 February 2026 03:36:27 +0000 (0:00:00.235) 0:00:29.155 ******
2026-02-03 03:36:27.944892 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:27.944897 | orchestrator |
2026-02-03 03:36:27.944903 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:36:27.944908 | orchestrator | Tuesday 03 February 2026 03:36:27 +0000 (0:00:00.244) 0:00:29.400 ******
2026-02-03 03:36:27.944920 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:27.944925 | orchestrator |
2026-02-03 03:36:27.944931 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:36:27.944936 | orchestrator | Tuesday 03 February 2026 03:36:27 +0000 (0:00:00.221) 0:00:29.621 ******
2026-02-03 03:36:27.944942 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:27.944947 | orchestrator |
2026-02-03 03:36:27.944960 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:36:39.941577 | orchestrator | Tuesday 03 February 2026 03:36:27 +0000 (0:00:00.233) 0:00:29.855 ******
2026-02-03 03:36:39.941702 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:39.941721 | orchestrator |
2026-02-03 03:36:39.941738 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:36:39.941752 | orchestrator | Tuesday 03 February 2026 03:36:28 +0000 (0:00:00.231) 0:00:30.086 ******
2026-02-03 03:36:39.941766 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:39.941780 | orchestrator |
2026-02-03 03:36:39.941794 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:36:39.941808 | orchestrator | Tuesday 03 February 2026 03:36:28 +0000 (0:00:00.248) 0:00:30.335 ******
2026-02-03 03:36:39.941821 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8)
2026-02-03 03:36:39.941836 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8)
2026-02-03 03:36:39.941850 | orchestrator |
2026-02-03 03:36:39.941881 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:36:39.941895 | orchestrator | Tuesday 03 February 2026 03:36:28 +0000 (0:00:00.447) 0:00:30.783 ******
2026-02-03 03:36:39.941907 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6b074c22-654d-40e5-9251-7e10d9fad41a)
2026-02-03 03:36:39.941921 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6b074c22-654d-40e5-9251-7e10d9fad41a)
2026-02-03 03:36:39.941934 | orchestrator |
2026-02-03 03:36:39.941946 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:36:39.942070 | orchestrator | Tuesday 03 February 2026 03:36:29 +0000 (0:00:00.457) 0:00:31.241 ******
2026-02-03 03:36:39.942087 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f58f055b-eadc-4fe1-a72e-2d1917f1f2dd)
2026-02-03 03:36:39.942102 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f58f055b-eadc-4fe1-a72e-2d1917f1f2dd)
2026-02-03 03:36:39.942118 | orchestrator |
2026-02-03 03:36:39.942133 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:36:39.942148 | orchestrator | Tuesday 03 February 2026 03:36:30 +0000 (0:00:00.795) 0:00:32.036 ******
2026-02-03 03:36:39.942163 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_15b94581-7087-40af-83f2-cd9970e768be)
2026-02-03 03:36:39.942179 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_15b94581-7087-40af-83f2-cd9970e768be)
2026-02-03 03:36:39.942194 | orchestrator |
2026-02-03 03:36:39.942209 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-03 03:36:39.942225 | orchestrator | Tuesday 03 February 2026 03:36:31 +0000 (0:00:01.042) 0:00:33.078 ******
2026-02-03 03:36:39.942239 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-03 03:36:39.942255 | orchestrator |
2026-02-03 03:36:39.942271 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:36:39.942287 | orchestrator | Tuesday 03 February 2026 03:36:31 +0000 (0:00:00.379) 0:00:33.458 ******
2026-02-03 03:36:39.942303 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-02-03 03:36:39.942320 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-02-03 03:36:39.942335 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-02-03 03:36:39.942379 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-02-03 03:36:39.942395 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-02-03 03:36:39.942410 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-02-03 03:36:39.942423 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-02-03 03:36:39.942437 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-02-03 03:36:39.942450 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-02-03 03:36:39.942463 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-02-03 03:36:39.942476 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-02-03 03:36:39.942489 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-02-03 03:36:39.942502 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-02-03 03:36:39.942516 | orchestrator |
2026-02-03 03:36:39.942530 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:36:39.942544 | orchestrator | Tuesday 03 February 2026 03:36:31 +0000 (0:00:00.451) 0:00:33.910 ******
2026-02-03 03:36:39.942556 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:39.942570 | orchestrator |
2026-02-03 03:36:39.942583 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:36:39.942596 | orchestrator | Tuesday 03 February 2026 03:36:32 +0000 (0:00:00.219) 0:00:34.130 ******
2026-02-03 03:36:39.942610 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:39.942623 | orchestrator |
2026-02-03 03:36:39.942637 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:36:39.942650 | orchestrator | Tuesday 03 February 2026 03:36:32 +0000 (0:00:00.223) 0:00:34.353 ******
2026-02-03 03:36:39.942664 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:39.942674 | orchestrator |
2026-02-03 03:36:39.942701 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:36:39.942709 | orchestrator | Tuesday 03 February 2026 03:36:32 +0000 (0:00:00.297) 0:00:34.650 ******
2026-02-03 03:36:39.942717 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:39.942725 | orchestrator |
2026-02-03 03:36:39.942733 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:36:39.942741 | orchestrator | Tuesday 03 February 2026 03:36:32 +0000 (0:00:00.236) 0:00:34.887 ******
2026-02-03 03:36:39.942748 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:39.942756 | orchestrator |
2026-02-03 03:36:39.942764 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:36:39.942773 | orchestrator | Tuesday 03 February 2026 03:36:33 +0000 (0:00:00.259) 0:00:35.147 ******
2026-02-03 03:36:39.942780 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:39.942788 | orchestrator |
2026-02-03 03:36:39.942796 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:36:39.942804 | orchestrator | Tuesday 03 February 2026 03:36:33 +0000 (0:00:00.232) 0:00:35.379 ******
2026-02-03 03:36:39.942820 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:39.942828 | orchestrator |
2026-02-03 03:36:39.942836 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:36:39.942844 | orchestrator | Tuesday 03 February 2026 03:36:33 +0000 (0:00:00.240) 0:00:35.619 ******
2026-02-03 03:36:39.942852 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:39.942859 | orchestrator |
2026-02-03 03:36:39.942867 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:36:39.942875 | orchestrator | Tuesday 03 February 2026 03:36:34 +0000 (0:00:00.709) 0:00:36.328 ******
2026-02-03 03:36:39.942882 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-02-03 03:36:39.942899 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-02-03 03:36:39.942908 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-02-03 03:36:39.942916 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-02-03 03:36:39.942924 | orchestrator |
2026-02-03 03:36:39.942932 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:36:39.942939 | orchestrator | Tuesday 03 February 2026 03:36:35 +0000 (0:00:00.758) 0:00:37.087 ******
2026-02-03 03:36:39.942947 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:39.942980 | orchestrator |
2026-02-03 03:36:39.942989 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:36:39.942997 | orchestrator | Tuesday 03 February 2026 03:36:35 +0000 (0:00:00.244) 0:00:37.332 ******
2026-02-03 03:36:39.943004 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:39.943012 | orchestrator |
2026-02-03 03:36:39.943020 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:36:39.943028 | orchestrator | Tuesday 03 February 2026 03:36:35 +0000 (0:00:00.244) 0:00:37.576 ******
2026-02-03 03:36:39.943036 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:39.943044 | orchestrator |
2026-02-03 03:36:39.943052 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-03 03:36:39.943060 | orchestrator | Tuesday 03 February 2026 03:36:35 +0000 (0:00:00.240) 0:00:37.817 ******
2026-02-03 03:36:39.943068 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:39.943076 | orchestrator |
2026-02-03 03:36:39.943083 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-03 03:36:39.943091 | orchestrator | Tuesday 03 February 2026 03:36:36 +0000 (0:00:00.233) 0:00:38.051 ******
2026-02-03 03:36:39.943099 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:39.943107 | orchestrator |
2026-02-03 03:36:39.943115 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-03 03:36:39.943122 | orchestrator | Tuesday 03 February 2026 03:36:36 +0000 (0:00:00.136) 0:00:38.187 ******
2026-02-03 03:36:39.943130 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '121565c5-01e5-5794-959e-075d91e35362'}})
2026-02-03 03:36:39.943139 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1a37b12a-042e-589b-8d7d-13944ef33291'}})
2026-02-03 03:36:39.943147 | orchestrator |
2026-02-03 03:36:39.943155 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-03 03:36:39.943163 | orchestrator | Tuesday 03 February 2026 03:36:36 +0000 (0:00:00.226) 0:00:38.413 ******
2026-02-03 03:36:39.943172 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'})
2026-02-03 03:36:39.943182 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'})
2026-02-03 03:36:39.943189 | orchestrator |
2026-02-03 03:36:39.943197 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-03 03:36:39.943205 | orchestrator | Tuesday 03 February 2026 03:36:38 +0000 (0:00:01.876) 0:00:40.290 ******
2026-02-03 03:36:39.943217 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'})
2026-02-03 03:36:39.943230 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'})
2026-02-03 03:36:39.943238 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:39.943246 | orchestrator |
2026-02-03 03:36:39.943255 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-03 03:36:39.943268 | orchestrator | Tuesday 03 February 2026 03:36:38 +0000 (0:00:00.183) 0:00:40.474 ******
2026-02-03 03:36:39.943280 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'})
2026-02-03 03:36:39.943319 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'})
2026-02-03 03:36:46.411127 | orchestrator |
2026-02-03 03:36:46.411266 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-03 03:36:46.411293 | orchestrator | Tuesday 03 February 2026 03:36:39 +0000 (0:00:01.373) 0:00:41.847 ******
2026-02-03 03:36:46.411311 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'})
2026-02-03 03:36:46.411330 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'})
2026-02-03 03:36:46.411346 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:46.411364 | orchestrator |
2026-02-03 03:36:46.411398 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-03 03:36:46.411409 | orchestrator | Tuesday 03 February 2026 03:36:40 +0000 (0:00:00.410) 0:00:42.257 ******
2026-02-03 03:36:46.411420 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:46.411430 | orchestrator |
2026-02-03 03:36:46.411440 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-03 03:36:46.411450 | orchestrator | Tuesday 03 February 2026 03:36:40 +0000 (0:00:00.181) 0:00:42.439 ******
2026-02-03 03:36:46.411460 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'})
2026-02-03 03:36:46.411470 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'})
2026-02-03 03:36:46.411480 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:46.411490 | orchestrator |
2026-02-03 03:36:46.411500 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-03 03:36:46.411510 | orchestrator | Tuesday 03 February 2026 03:36:40 +0000 (0:00:00.169) 0:00:42.608 ******
2026-02-03 03:36:46.411520 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:46.411529 | orchestrator |
2026-02-03 03:36:46.411539 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-03 03:36:46.411549 | orchestrator | Tuesday 03 February 2026 03:36:40 +0000 (0:00:00.165) 0:00:42.774 ******
2026-02-03 03:36:46.411559 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'})
2026-02-03 03:36:46.411577 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'})
2026-02-03 03:36:46.411593 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:46.411612 | orchestrator |
2026-02-03 03:36:46.411630 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-03 03:36:46.411648 | orchestrator | Tuesday 03 February 2026 03:36:41 +0000 (0:00:00.178) 0:00:42.952 ******
2026-02-03 03:36:46.411665 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:46.411681 | orchestrator |
2026-02-03 03:36:46.411699 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-03 03:36:46.411717 | orchestrator | Tuesday 03 February 2026 03:36:41 +0000 (0:00:00.175) 0:00:43.127 ******
2026-02-03 03:36:46.411734 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'})
2026-02-03 03:36:46.411752 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'})
2026-02-03 03:36:46.411770 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:46.411788 | orchestrator |
2026-02-03 03:36:46.411803 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-03 03:36:46.411837 | orchestrator | Tuesday 03 February 2026 03:36:41 +0000 (0:00:00.159) 0:00:43.287 ******
2026-02-03 03:36:46.411850 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:36:46.411866 | orchestrator |
2026-02-03 03:36:46.411883 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-03 03:36:46.411900 | orchestrator | Tuesday 03 February 2026 03:36:41 +0000 (0:00:00.147) 0:00:43.434 ******
2026-02-03 03:36:46.411917 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'})
2026-02-03 03:36:46.411933 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'})
2026-02-03 03:36:46.411979 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:46.411996 | orchestrator |
2026-02-03 03:36:46.412013 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-03 03:36:46.412029 | orchestrator | Tuesday 03 February 2026 03:36:41 +0000 (0:00:00.172) 0:00:43.607 ******
2026-02-03 03:36:46.412045 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'})
2026-02-03 03:36:46.412061 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'})
2026-02-03 03:36:46.412078 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:46.412095 | orchestrator |
2026-02-03 03:36:46.412111 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-03 03:36:46.412153 | orchestrator | Tuesday 03 February 2026 03:36:41 +0000 (0:00:00.169) 0:00:43.776 ******
2026-02-03 03:36:46.412173 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'})
2026-02-03 03:36:46.412190 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'})
2026-02-03 03:36:46.412207 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:46.412224 | orchestrator |
2026-02-03 03:36:46.412241 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-03 03:36:46.412259 | orchestrator | Tuesday 03 February 2026 03:36:42 +0000 (0:00:00.180) 0:00:43.956 ******
2026-02-03 03:36:46.412278 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:46.412288 | orchestrator |
2026-02-03 03:36:46.412298 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-03 03:36:46.412308 | orchestrator | Tuesday 03 February 2026 03:36:42 +0000 (0:00:00.379) 0:00:44.336 ******
2026-02-03 03:36:46.412317 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:46.412327 | orchestrator |
2026-02-03 03:36:46.412337 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-03 03:36:46.412346 | orchestrator | Tuesday 03 February 2026 03:36:42 +0000 (0:00:00.156) 0:00:44.492 ******
2026-02-03 03:36:46.412356 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:46.412365 | orchestrator |
2026-02-03 03:36:46.412375 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-03 03:36:46.412385 | orchestrator | Tuesday 03 February 2026 03:36:42 +0000 (0:00:00.173) 0:00:44.665 ******
2026-02-03 03:36:46.412394 | orchestrator | ok: [testbed-node-4] => {
2026-02-03 03:36:46.412404 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-02-03 03:36:46.412414 | orchestrator | }
2026-02-03 03:36:46.412424 | orchestrator |
2026-02-03 03:36:46.412434 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-03 03:36:46.412444 | orchestrator | Tuesday 03 February 2026 03:36:42 +0000 (0:00:00.149) 0:00:44.815 ******
2026-02-03 03:36:46.412453 | orchestrator | ok: [testbed-node-4] => {
2026-02-03 03:36:46.412463 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-02-03 03:36:46.412484 | orchestrator | }
2026-02-03 03:36:46.412501 | orchestrator |
2026-02-03 03:36:46.412517 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-03 03:36:46.412535 | orchestrator | Tuesday 03 February 2026 03:36:43 +0000 (0:00:00.159) 0:00:44.974 ******
2026-02-03 03:36:46.412552 | orchestrator | ok: [testbed-node-4] => {
2026-02-03 03:36:46.412568 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-02-03 03:36:46.412585 | orchestrator | }
2026-02-03 03:36:46.412602 | orchestrator |
2026-02-03 03:36:46.412615 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-03 03:36:46.412630 | orchestrator | Tuesday 03 February 2026 03:36:43 +0000 (0:00:00.195) 0:00:45.170 ******
2026-02-03 03:36:46.412647 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:36:46.412665 | orchestrator |
2026-02-03 03:36:46.412675 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-03 03:36:46.412685 | orchestrator | Tuesday 03 February 2026 03:36:43 +0000 (0:00:00.524) 0:00:45.694 ******
2026-02-03 03:36:46.412695 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:36:46.412705 | orchestrator |
2026-02-03 03:36:46.412714 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-03 03:36:46.412730 | orchestrator | Tuesday 03 February 2026 03:36:44 +0000 (0:00:00.532) 0:00:46.226 ******
2026-02-03 03:36:46.412746 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:36:46.412761 | orchestrator |
2026-02-03 03:36:46.412777 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-03 03:36:46.412792 | orchestrator | Tuesday 03 February 2026 03:36:44 +0000 (0:00:00.543) 0:00:46.770 ******
2026-02-03 03:36:46.412807 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:36:46.412822 | orchestrator |
2026-02-03 03:36:46.412837 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-03 03:36:46.412852 | orchestrator | Tuesday 03 February 2026 03:36:45 +0000 (0:00:00.174) 0:00:46.945 ******
2026-02-03 03:36:46.412867 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:46.412882 | orchestrator |
2026-02-03 03:36:46.412898 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-03 03:36:46.412914 | orchestrator | Tuesday 03 February 2026 03:36:45 +0000 (0:00:00.112) 0:00:47.058 ******
2026-02-03 03:36:46.412929 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:46.412964 | orchestrator |
2026-02-03 03:36:46.412984 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-03 03:36:46.413000 | orchestrator | Tuesday 03 February 2026 03:36:45 +0000 (0:00:00.376) 0:00:47.434 ******
2026-02-03 03:36:46.413010 | orchestrator | ok: [testbed-node-4] => {
2026-02-03 03:36:46.413019 | orchestrator |     "vgs_report": {
2026-02-03 03:36:46.413030 | orchestrator |         "vg": []
2026-02-03 03:36:46.413040 | orchestrator |     }
2026-02-03 03:36:46.413050 | orchestrator | }
2026-02-03 03:36:46.413060 | orchestrator |
2026-02-03 03:36:46.413069 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-03 03:36:46.413079 | orchestrator | Tuesday 03 February 2026 03:36:45 +0000 (0:00:00.207) 0:00:47.642 ******
2026-02-03 03:36:46.413088 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:46.413098 | orchestrator |
2026-02-03 03:36:46.413108 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-03 03:36:46.413117 | orchestrator | Tuesday 03 February 2026 03:36:45 +0000 (0:00:00.204) 0:00:47.846 ******
2026-02-03 03:36:46.413127 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:46.413136 | orchestrator |
2026-02-03 03:36:46.413146 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-03 03:36:46.413155 | orchestrator | Tuesday 03 February 2026 03:36:46 +0000 (0:00:00.173) 0:00:48.020 ******
2026-02-03 03:36:46.413165 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:46.413174 | orchestrator |
2026-02-03 03:36:46.413184 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-03 03:36:46.413194 | orchestrator | Tuesday 03 February 2026 03:36:46 +0000 (0:00:00.153) 0:00:48.173 ******
2026-02-03 03:36:46.413213 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:46.413223 | orchestrator |
2026-02-03 03:36:46.413244 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-03 03:36:51.634892 | orchestrator | Tuesday 03 February 2026 03:36:46 +0000 (0:00:00.148) 0:00:48.321 ******
2026-02-03 03:36:51.635038 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:51.635046 | orchestrator |
2026-02-03 03:36:51.635053 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-02-03 03:36:51.635057 | orchestrator | Tuesday 03 February 2026 03:36:46 +0000 (0:00:00.147) 0:00:48.468 ******
2026-02-03 03:36:51.635062 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:51.635066 | orchestrator |
2026-02-03 03:36:51.635070 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-02-03 03:36:51.635075 | orchestrator | Tuesday 03 February 2026 03:36:46 +0000 (0:00:00.160) 0:00:48.628 ******
2026-02-03 03:36:51.635078 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:51.635082 | orchestrator |
2026-02-03 03:36:51.635103 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-02-03 03:36:51.635107 | orchestrator | Tuesday 03 February 2026 03:36:46 +0000 (0:00:00.156) 0:00:48.785 ******
2026-02-03 03:36:51.635111 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:51.635115 | orchestrator |
2026-02-03 03:36:51.635119 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-02-03 03:36:51.635123 | orchestrator | Tuesday 03 February 2026 03:36:47 +0000 (0:00:00.147) 0:00:48.933 ******
2026-02-03 03:36:51.635127 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:51.635131 | orchestrator |
2026-02-03 03:36:51.635134 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-02-03 03:36:51.635139 | orchestrator | Tuesday 03 February 2026 03:36:47 +0000 (0:00:00.149) 0:00:49.082 ******
2026-02-03 03:36:51.635142 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:51.635147 | orchestrator |
2026-02-03 03:36:51.635150 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-02-03 03:36:51.635155 | orchestrator | Tuesday 03 February 2026 03:36:47 +0000 (0:00:00.386) 0:00:49.469 ******
2026-02-03 03:36:51.635159 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:51.635163 | orchestrator |
2026-02-03 03:36:51.635167 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-02-03 03:36:51.635171 | orchestrator | Tuesday 03 February 2026 03:36:47 +0000 (0:00:00.144) 0:00:49.613 ******
2026-02-03 03:36:51.635175 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:51.635179 | orchestrator |
2026-02-03 03:36:51.635183 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-02-03 03:36:51.635187 | orchestrator | Tuesday 03 February 2026 03:36:47 +0000 (0:00:00.154) 0:00:49.767 ******
2026-02-03 03:36:51.635190 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:51.635194 | orchestrator |
2026-02-03 03:36:51.635198 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-02-03 03:36:51.635202 | orchestrator | Tuesday 03 February 2026 03:36:48 +0000 (0:00:00.157) 0:00:49.925 ******
2026-02-03 03:36:51.635206 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:51.635210 | orchestrator |
2026-02-03 03:36:51.635214 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-02-03 03:36:51.635218 | orchestrator | Tuesday 03 February 2026 03:36:48 +0000 (0:00:00.163) 0:00:50.088 ******
2026-02-03 03:36:51.635226 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'})
2026-02-03 03:36:51.635235 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'})
2026-02-03 03:36:51.635241 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:51.635247 | orchestrator |
2026-02-03 03:36:51.635253 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-02-03 03:36:51.635281 | orchestrator | Tuesday 03 February 2026 03:36:48 +0000 (0:00:00.170) 0:00:50.259 ******
2026-02-03 03:36:51.635288 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'})
2026-02-03 03:36:51.635294 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'})
2026-02-03 03:36:51.635298 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:51.635301 | orchestrator |
2026-02-03 03:36:51.635305 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-02-03 03:36:51.635309 | orchestrator | Tuesday 03 February 2026 03:36:48 +0000 (0:00:00.157) 0:00:50.416 ******
2026-02-03 03:36:51.635313 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'})
2026-02-03 03:36:51.635317 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'})
2026-02-03 03:36:51.635321 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:51.635326 | orchestrator |
2026-02-03 03:36:51.635330 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-02-03 03:36:51.635333 | orchestrator | Tuesday 03 February 2026 03:36:48 +0000 (0:00:00.169) 0:00:50.585 ******
2026-02-03 03:36:51.635337 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'})
2026-02-03 03:36:51.635341 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'})
2026-02-03 03:36:51.635345 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:51.635349 | orchestrator |
2026-02-03 03:36:51.635368 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-02-03 03:36:51.635372 | orchestrator | Tuesday 03 February 2026 03:36:48 +0000 (0:00:00.169) 0:00:50.754 ******
2026-02-03 03:36:51.635376 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'})
2026-02-03 03:36:51.635380 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'})
2026-02-03 03:36:51.635384 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:51.635388 | orchestrator |
2026-02-03 03:36:51.635395 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-03 03:36:51.635399 | orchestrator | Tuesday 03 February 2026 03:36:49 +0000 (0:00:00.188) 0:00:50.943 ******
2026-02-03 03:36:51.635403 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'})
2026-02-03 03:36:51.635407 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'})
2026-02-03 03:36:51.635410 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:51.635414 | orchestrator |
2026-02-03 03:36:51.635418 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-03 03:36:51.635422 | orchestrator | Tuesday 03 February 2026 03:36:49 +0000 (0:00:00.177) 0:00:51.120 ******
2026-02-03 03:36:51.635427 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'})
2026-02-03 03:36:51.635431 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'})
2026-02-03 03:36:51.635436 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:51.635445 | orchestrator |
2026-02-03 03:36:51.635451 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-03 03:36:51.635458 | orchestrator | Tuesday 03 February 2026 03:36:49 +0000 (0:00:00.425) 0:00:51.545 ******
2026-02-03 03:36:51.635464 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'})
2026-02-03 03:36:51.635471 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'})
2026-02-03 03:36:51.635476 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:51.635482 | orchestrator |
2026-02-03 03:36:51.635489 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-03 03:36:51.635495 | orchestrator | Tuesday 03 February 2026 03:36:49 +0000 (0:00:00.174) 0:00:51.720 ******
2026-02-03 03:36:51.635500 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:36:51.635507 | orchestrator |
2026-02-03 03:36:51.635512 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-03 03:36:51.635518 | orchestrator | Tuesday 03 February 2026 03:36:50 +0000 (0:00:00.545) 0:00:52.265 ******
2026-02-03 03:36:51.635524 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:36:51.635530 | orchestrator |
2026-02-03 03:36:51.635536 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-03 03:36:51.635543 | orchestrator | Tuesday 03 February 2026 03:36:50 +0000 (0:00:00.589) 0:00:52.855 ******
2026-02-03 03:36:51.635548 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:36:51.635555 | orchestrator |
2026-02-03 03:36:51.635561 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-03 03:36:51.635568 | orchestrator | Tuesday 03 February 2026 03:36:51 +0000 (0:00:00.150) 0:00:53.005 ******
2026-02-03 03:36:51.635576 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'vg_name': 'ceph-121565c5-01e5-5794-959e-075d91e35362'})
2026-02-03 03:36:51.635583 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'vg_name': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'})
2026-02-03 03:36:51.635589 | orchestrator |
2026-02-03 03:36:51.635595 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-03 03:36:51.635601 | orchestrator | Tuesday 03 February 2026 03:36:51 +0000 (0:00:00.197) 0:00:53.203 ******
2026-02-03 03:36:51.635607 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'})
2026-02-03 03:36:51.635614 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'})
2026-02-03 03:36:51.635620 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:51.635626 | orchestrator |
2026-02-03 03:36:51.635632 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-02-03 03:36:51.635638 | orchestrator | Tuesday 03 February 2026 03:36:51 +0000 (0:00:00.164) 0:00:53.368 ******
2026-02-03 03:36:51.635645 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'})
2026-02-03 03:36:51.635659 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'})
2026-02-03 03:36:58.721330 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:36:58.721455 | orchestrator |
2026-02-03 03:36:58.721469 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-02-03 03:36:58.721478 |
orchestrator | Tuesday 03 February 2026 03:36:51 +0000 (0:00:00.180) 0:00:53.548 ****** 2026-02-03 03:36:58.721487 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'})  2026-02-03 03:36:58.721540 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'})  2026-02-03 03:36:58.721548 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:36:58.721554 | orchestrator | 2026-02-03 03:36:58.721561 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-03 03:36:58.721568 | orchestrator | Tuesday 03 February 2026 03:36:51 +0000 (0:00:00.181) 0:00:53.729 ****** 2026-02-03 03:36:58.721574 | orchestrator | ok: [testbed-node-4] => { 2026-02-03 03:36:58.721580 | orchestrator |  "lvm_report": { 2026-02-03 03:36:58.721589 | orchestrator |  "lv": [ 2026-02-03 03:36:58.721596 | orchestrator |  { 2026-02-03 03:36:58.721603 | orchestrator |  "lv_name": "osd-block-121565c5-01e5-5794-959e-075d91e35362", 2026-02-03 03:36:58.721610 | orchestrator |  "vg_name": "ceph-121565c5-01e5-5794-959e-075d91e35362" 2026-02-03 03:36:58.721617 | orchestrator |  }, 2026-02-03 03:36:58.721623 | orchestrator |  { 2026-02-03 03:36:58.721629 | orchestrator |  "lv_name": "osd-block-1a37b12a-042e-589b-8d7d-13944ef33291", 2026-02-03 03:36:58.721635 | orchestrator |  "vg_name": "ceph-1a37b12a-042e-589b-8d7d-13944ef33291" 2026-02-03 03:36:58.721641 | orchestrator |  } 2026-02-03 03:36:58.721648 | orchestrator |  ], 2026-02-03 03:36:58.721654 | orchestrator |  "pv": [ 2026-02-03 03:36:58.721661 | orchestrator |  { 2026-02-03 03:36:58.721667 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-03 03:36:58.721673 | orchestrator |  "vg_name": "ceph-121565c5-01e5-5794-959e-075d91e35362" 2026-02-03 03:36:58.721680 | orchestrator |  }, 2026-02-03 
03:36:58.721687 | orchestrator |  { 2026-02-03 03:36:58.721693 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-03 03:36:58.721699 | orchestrator |  "vg_name": "ceph-1a37b12a-042e-589b-8d7d-13944ef33291" 2026-02-03 03:36:58.721706 | orchestrator |  } 2026-02-03 03:36:58.721712 | orchestrator |  ] 2026-02-03 03:36:58.721718 | orchestrator |  } 2026-02-03 03:36:58.721725 | orchestrator | } 2026-02-03 03:36:58.721731 | orchestrator | 2026-02-03 03:36:58.721738 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-03 03:36:58.721744 | orchestrator | 2026-02-03 03:36:58.721750 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-03 03:36:58.721756 | orchestrator | Tuesday 03 February 2026 03:36:52 +0000 (0:00:00.305) 0:00:54.034 ****** 2026-02-03 03:36:58.721762 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-03 03:36:58.721768 | orchestrator | 2026-02-03 03:36:58.721774 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-03 03:36:58.721780 | orchestrator | Tuesday 03 February 2026 03:36:52 +0000 (0:00:00.760) 0:00:54.794 ****** 2026-02-03 03:36:58.721785 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:36:58.721792 | orchestrator | 2026-02-03 03:36:58.721798 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:36:58.721805 | orchestrator | Tuesday 03 February 2026 03:36:53 +0000 (0:00:00.300) 0:00:55.095 ****** 2026-02-03 03:36:58.721811 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-03 03:36:58.721817 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-02-03 03:36:58.721823 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-03 03:36:58.721829 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-02-03 03:36:58.721835 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-02-03 03:36:58.721842 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-02-03 03:36:58.721848 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-02-03 03:36:58.721860 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-02-03 03:36:58.721867 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-02-03 03:36:58.721873 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-02-03 03:36:58.721880 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-02-03 03:36:58.721886 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-02-03 03:36:58.721892 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-02-03 03:36:58.721899 | orchestrator | 2026-02-03 03:36:58.721905 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:36:58.721911 | orchestrator | Tuesday 03 February 2026 03:36:53 +0000 (0:00:00.474) 0:00:55.569 ****** 2026-02-03 03:36:58.721918 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:36:58.721924 | orchestrator | 2026-02-03 03:36:58.721945 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:36:58.721952 | orchestrator | Tuesday 03 February 2026 03:36:53 +0000 (0:00:00.252) 0:00:55.822 ****** 2026-02-03 03:36:58.721958 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:36:58.721965 | orchestrator | 2026-02-03 
03:36:58.721972 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:36:58.721999 | orchestrator | Tuesday 03 February 2026 03:36:54 +0000 (0:00:00.209) 0:00:56.032 ****** 2026-02-03 03:36:58.722006 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:36:58.722083 | orchestrator | 2026-02-03 03:36:58.722091 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:36:58.722096 | orchestrator | Tuesday 03 February 2026 03:36:54 +0000 (0:00:00.227) 0:00:56.259 ****** 2026-02-03 03:36:58.722100 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:36:58.722105 | orchestrator | 2026-02-03 03:36:58.722109 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:36:58.722115 | orchestrator | Tuesday 03 February 2026 03:36:54 +0000 (0:00:00.213) 0:00:56.473 ****** 2026-02-03 03:36:58.722120 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:36:58.722124 | orchestrator | 2026-02-03 03:36:58.722129 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:36:58.722134 | orchestrator | Tuesday 03 February 2026 03:36:54 +0000 (0:00:00.240) 0:00:56.714 ****** 2026-02-03 03:36:58.722138 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:36:58.722143 | orchestrator | 2026-02-03 03:36:58.722147 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:36:58.722152 | orchestrator | Tuesday 03 February 2026 03:36:55 +0000 (0:00:00.208) 0:00:56.923 ****** 2026-02-03 03:36:58.722157 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:36:58.722161 | orchestrator | 2026-02-03 03:36:58.722165 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:36:58.722170 | orchestrator | Tuesday 03 February 2026 03:36:55 +0000 (0:00:00.219) 
0:00:57.142 ****** 2026-02-03 03:36:58.722174 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:36:58.722179 | orchestrator | 2026-02-03 03:36:58.722183 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:36:58.722188 | orchestrator | Tuesday 03 February 2026 03:36:55 +0000 (0:00:00.724) 0:00:57.866 ****** 2026-02-03 03:36:58.722192 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457) 2026-02-03 03:36:58.722199 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457) 2026-02-03 03:36:58.722204 | orchestrator | 2026-02-03 03:36:58.722208 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:36:58.722213 | orchestrator | Tuesday 03 February 2026 03:36:56 +0000 (0:00:00.465) 0:00:58.332 ****** 2026-02-03 03:36:58.722279 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a2e14d93-a486-403c-9c37-4f6de49ddee5) 2026-02-03 03:36:58.722292 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a2e14d93-a486-403c-9c37-4f6de49ddee5) 2026-02-03 03:36:58.722296 | orchestrator | 2026-02-03 03:36:58.722300 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:36:58.722303 | orchestrator | Tuesday 03 February 2026 03:36:56 +0000 (0:00:00.482) 0:00:58.814 ****** 2026-02-03 03:36:58.722307 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0bcbc917-1e4e-4947-8603-c7f49bd04ea8) 2026-02-03 03:36:58.722311 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0bcbc917-1e4e-4947-8603-c7f49bd04ea8) 2026-02-03 03:36:58.722315 | orchestrator | 2026-02-03 03:36:58.722318 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:36:58.722322 | orchestrator | Tuesday 03 
February 2026 03:36:57 +0000 (0:00:00.452) 0:00:59.267 ****** 2026-02-03 03:36:58.722326 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1ed5f26b-b68f-43a9-951f-f4acae255308) 2026-02-03 03:36:58.722330 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1ed5f26b-b68f-43a9-951f-f4acae255308) 2026-02-03 03:36:58.722334 | orchestrator | 2026-02-03 03:36:58.722338 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-03 03:36:58.722341 | orchestrator | Tuesday 03 February 2026 03:36:57 +0000 (0:00:00.493) 0:00:59.760 ****** 2026-02-03 03:36:58.722345 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-03 03:36:58.722349 | orchestrator | 2026-02-03 03:36:58.722353 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:36:58.722356 | orchestrator | Tuesday 03 February 2026 03:36:58 +0000 (0:00:00.369) 0:01:00.130 ****** 2026-02-03 03:36:58.722360 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-02-03 03:36:58.722364 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-02-03 03:36:58.722368 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-02-03 03:36:58.722372 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-02-03 03:36:58.722376 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-02-03 03:36:58.722379 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-02-03 03:36:58.722384 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-02-03 03:36:58.722390 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-02-03 03:36:58.722396 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-02-03 03:36:58.722403 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-02-03 03:36:58.722409 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-02-03 03:36:58.722423 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-02-03 03:37:08.105640 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-02-03 03:37:08.105754 | orchestrator | 2026-02-03 03:37:08.105773 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:37:08.105787 | orchestrator | Tuesday 03 February 2026 03:36:58 +0000 (0:00:00.495) 0:01:00.625 ****** 2026-02-03 03:37:08.105798 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:37:08.105810 | orchestrator | 2026-02-03 03:37:08.105821 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:37:08.105850 | orchestrator | Tuesday 03 February 2026 03:36:58 +0000 (0:00:00.201) 0:01:00.827 ****** 2026-02-03 03:37:08.105862 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:37:08.105893 | orchestrator | 2026-02-03 03:37:08.105905 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:37:08.105915 | orchestrator | Tuesday 03 February 2026 03:36:59 +0000 (0:00:00.219) 0:01:01.046 ****** 2026-02-03 03:37:08.105973 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:37:08.105985 | orchestrator | 2026-02-03 03:37:08.105996 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:37:08.106006 | 
orchestrator | Tuesday 03 February 2026 03:36:59 +0000 (0:00:00.228) 0:01:01.275 ****** 2026-02-03 03:37:08.106065 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:37:08.106077 | orchestrator | 2026-02-03 03:37:08.106088 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:37:08.106100 | orchestrator | Tuesday 03 February 2026 03:36:59 +0000 (0:00:00.231) 0:01:01.506 ****** 2026-02-03 03:37:08.106111 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:37:08.106121 | orchestrator | 2026-02-03 03:37:08.106132 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:37:08.106143 | orchestrator | Tuesday 03 February 2026 03:37:00 +0000 (0:00:00.705) 0:01:02.212 ****** 2026-02-03 03:37:08.106170 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:37:08.106181 | orchestrator | 2026-02-03 03:37:08.106192 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:37:08.106203 | orchestrator | Tuesday 03 February 2026 03:37:00 +0000 (0:00:00.304) 0:01:02.516 ****** 2026-02-03 03:37:08.106214 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:37:08.106225 | orchestrator | 2026-02-03 03:37:08.106236 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:37:08.106247 | orchestrator | Tuesday 03 February 2026 03:37:00 +0000 (0:00:00.244) 0:01:02.761 ****** 2026-02-03 03:37:08.106258 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:37:08.106269 | orchestrator | 2026-02-03 03:37:08.106280 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:37:08.106291 | orchestrator | Tuesday 03 February 2026 03:37:01 +0000 (0:00:00.233) 0:01:02.994 ****** 2026-02-03 03:37:08.106302 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-02-03 03:37:08.106314 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-02-03 03:37:08.106326 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-02-03 03:37:08.106337 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-02-03 03:37:08.106348 | orchestrator | 2026-02-03 03:37:08.106358 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:37:08.106369 | orchestrator | Tuesday 03 February 2026 03:37:01 +0000 (0:00:00.684) 0:01:03.678 ****** 2026-02-03 03:37:08.106379 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:37:08.106390 | orchestrator | 2026-02-03 03:37:08.106401 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:37:08.106412 | orchestrator | Tuesday 03 February 2026 03:37:01 +0000 (0:00:00.216) 0:01:03.895 ****** 2026-02-03 03:37:08.106423 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:37:08.106446 | orchestrator | 2026-02-03 03:37:08.106458 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:37:08.106468 | orchestrator | Tuesday 03 February 2026 03:37:02 +0000 (0:00:00.226) 0:01:04.122 ****** 2026-02-03 03:37:08.106479 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:37:08.106498 | orchestrator | 2026-02-03 03:37:08.106508 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-03 03:37:08.106518 | orchestrator | Tuesday 03 February 2026 03:37:02 +0000 (0:00:00.209) 0:01:04.332 ****** 2026-02-03 03:37:08.106528 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:37:08.106539 | orchestrator | 2026-02-03 03:37:08.106549 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-03 03:37:08.106559 | orchestrator | Tuesday 03 February 2026 03:37:02 +0000 (0:00:00.214) 0:01:04.546 ****** 2026-02-03 03:37:08.106569 | orchestrator | skipping: [testbed-node-5] 2026-02-03 
03:37:08.106580 | orchestrator | 2026-02-03 03:37:08.106601 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-03 03:37:08.106611 | orchestrator | Tuesday 03 February 2026 03:37:02 +0000 (0:00:00.141) 0:01:04.688 ****** 2026-02-03 03:37:08.106623 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9cbb71d1-90c1-5063-b304-f845b9e79bfb'}}) 2026-02-03 03:37:08.106635 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '77c51d77-cdc1-5563-af81-33d9bc4e9bd8'}}) 2026-02-03 03:37:08.106645 | orchestrator | 2026-02-03 03:37:08.106655 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-03 03:37:08.106665 | orchestrator | Tuesday 03 February 2026 03:37:02 +0000 (0:00:00.205) 0:01:04.893 ****** 2026-02-03 03:37:08.106677 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'}) 2026-02-03 03:37:08.106689 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'}) 2026-02-03 03:37:08.106700 | orchestrator | 2026-02-03 03:37:08.106710 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-03 03:37:08.106740 | orchestrator | Tuesday 03 February 2026 03:37:04 +0000 (0:00:01.849) 0:01:06.742 ****** 2026-02-03 03:37:08.106750 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'})  2026-02-03 03:37:08.106763 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'})  2026-02-03 03:37:08.106773 | orchestrator | skipping: 
[testbed-node-5] 2026-02-03 03:37:08.106784 | orchestrator | 2026-02-03 03:37:08.106801 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-03 03:37:08.106811 | orchestrator | Tuesday 03 February 2026 03:37:05 +0000 (0:00:00.385) 0:01:07.128 ****** 2026-02-03 03:37:08.106822 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'}) 2026-02-03 03:37:08.106830 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'}) 2026-02-03 03:37:08.106836 | orchestrator | 2026-02-03 03:37:08.106842 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-03 03:37:08.106849 | orchestrator | Tuesday 03 February 2026 03:37:06 +0000 (0:00:01.393) 0:01:08.522 ****** 2026-02-03 03:37:08.106855 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'})  2026-02-03 03:37:08.106861 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'})  2026-02-03 03:37:08.106867 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:37:08.106874 | orchestrator | 2026-02-03 03:37:08.106880 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-03 03:37:08.106886 | orchestrator | Tuesday 03 February 2026 03:37:06 +0000 (0:00:00.161) 0:01:08.683 ****** 2026-02-03 03:37:08.106892 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:37:08.106899 | orchestrator | 2026-02-03 03:37:08.106905 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-03 03:37:08.106911 | 
orchestrator | Tuesday 03 February 2026 03:37:06 +0000 (0:00:00.161) 0:01:08.844 ****** 2026-02-03 03:37:08.106917 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'})  2026-02-03 03:37:08.106953 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'})  2026-02-03 03:37:08.106973 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:37:08.106983 | orchestrator | 2026-02-03 03:37:08.106993 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-03 03:37:08.107004 | orchestrator | Tuesday 03 February 2026 03:37:07 +0000 (0:00:00.159) 0:01:09.004 ****** 2026-02-03 03:37:08.107015 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:37:08.107026 | orchestrator | 2026-02-03 03:37:08.107037 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-03 03:37:08.107047 | orchestrator | Tuesday 03 February 2026 03:37:07 +0000 (0:00:00.155) 0:01:09.159 ****** 2026-02-03 03:37:08.107058 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'})  2026-02-03 03:37:08.107065 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'})  2026-02-03 03:37:08.107071 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:37:08.107077 | orchestrator | 2026-02-03 03:37:08.107083 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-03 03:37:08.107088 | orchestrator | Tuesday 03 February 2026 03:37:07 +0000 (0:00:00.170) 0:01:09.330 ****** 2026-02-03 03:37:08.107093 | orchestrator | 
skipping: [testbed-node-5] 2026-02-03 03:37:08.107099 | orchestrator | 2026-02-03 03:37:08.107104 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-03 03:37:08.107110 | orchestrator | Tuesday 03 February 2026 03:37:07 +0000 (0:00:00.153) 0:01:09.483 ****** 2026-02-03 03:37:08.107115 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'})  2026-02-03 03:37:08.107121 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'})  2026-02-03 03:37:08.107126 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:37:08.107132 | orchestrator | 2026-02-03 03:37:08.107137 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-03 03:37:08.107143 | orchestrator | Tuesday 03 February 2026 03:37:07 +0000 (0:00:00.174) 0:01:09.658 ****** 2026-02-03 03:37:08.107148 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:37:08.107154 | orchestrator | 2026-02-03 03:37:08.107159 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-03 03:37:08.107165 | orchestrator | Tuesday 03 February 2026 03:37:07 +0000 (0:00:00.151) 0:01:09.810 ****** 2026-02-03 03:37:08.107177 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'})  2026-02-03 03:37:14.956711 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'})  2026-02-03 03:37:14.956842 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:37:14.956864 | orchestrator | 2026-02-03 03:37:14.956884 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-02-03 03:37:14.956902 | orchestrator | Tuesday 03 February 2026 03:37:08 +0000 (0:00:00.205) 0:01:10.016 ****** 2026-02-03 03:37:14.957027 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'})  2026-02-03 03:37:14.957042 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'})  2026-02-03 03:37:14.957052 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:37:14.957062 | orchestrator | 2026-02-03 03:37:14.957080 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-03 03:37:14.957098 | orchestrator | Tuesday 03 February 2026 03:37:08 +0000 (0:00:00.164) 0:01:10.180 ****** 2026-02-03 03:37:14.957141 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'})  2026-02-03 03:37:14.957158 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'})  2026-02-03 03:37:14.957175 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:37:14.957191 | orchestrator | 2026-02-03 03:37:14.957209 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-03 03:37:14.957227 | orchestrator | Tuesday 03 February 2026 03:37:08 +0000 (0:00:00.408) 0:01:10.589 ****** 2026-02-03 03:37:14.957243 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:37:14.957260 | orchestrator | 2026-02-03 03:37:14.957275 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-03 03:37:14.957287 | orchestrator | Tuesday 03 February 2026 03:37:08 +0000 
(0:00:00.145) 0:01:10.734 ******
2026-02-03 03:37:14.957299 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:14.957311 | orchestrator |
2026-02-03 03:37:14.957322 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-03 03:37:14.957333 | orchestrator | Tuesday 03 February 2026 03:37:08 +0000 (0:00:00.142) 0:01:10.877 ******
2026-02-03 03:37:14.957345 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:14.957357 | orchestrator |
2026-02-03 03:37:14.957369 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-03 03:37:14.957380 | orchestrator | Tuesday 03 February 2026 03:37:09 +0000 (0:00:00.142) 0:01:11.019 ******
2026-02-03 03:37:14.957391 | orchestrator | ok: [testbed-node-5] => {
2026-02-03 03:37:14.957403 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-02-03 03:37:14.957415 | orchestrator | }
2026-02-03 03:37:14.957426 | orchestrator |
2026-02-03 03:37:14.957437 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-03 03:37:14.957449 | orchestrator | Tuesday 03 February 2026 03:37:09 +0000 (0:00:00.159) 0:01:11.179 ******
2026-02-03 03:37:14.957460 | orchestrator | ok: [testbed-node-5] => {
2026-02-03 03:37:14.957490 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-02-03 03:37:14.957500 | orchestrator | }
2026-02-03 03:37:14.957510 | orchestrator |
2026-02-03 03:37:14.957520 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-03 03:37:14.957530 | orchestrator | Tuesday 03 February 2026 03:37:09 +0000 (0:00:00.157) 0:01:11.336 ******
2026-02-03 03:37:14.957539 | orchestrator | ok: [testbed-node-5] => {
2026-02-03 03:37:14.957549 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-02-03 03:37:14.957559 | orchestrator | }
2026-02-03 03:37:14.957569 | orchestrator |
2026-02-03 03:37:14.957579 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-03 03:37:14.957588 | orchestrator | Tuesday 03 February 2026 03:37:09 +0000 (0:00:00.156) 0:01:11.493 ******
2026-02-03 03:37:14.957598 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:37:14.957608 | orchestrator |
2026-02-03 03:37:14.957618 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-03 03:37:14.957627 | orchestrator | Tuesday 03 February 2026 03:37:10 +0000 (0:00:00.542) 0:01:12.035 ******
2026-02-03 03:37:14.957637 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:37:14.957647 | orchestrator |
2026-02-03 03:37:14.957656 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-03 03:37:14.957666 | orchestrator | Tuesday 03 February 2026 03:37:10 +0000 (0:00:00.549) 0:01:12.585 ******
2026-02-03 03:37:14.957676 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:37:14.957686 | orchestrator |
2026-02-03 03:37:14.957695 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-03 03:37:14.957705 | orchestrator | Tuesday 03 February 2026 03:37:11 +0000 (0:00:00.548) 0:01:13.133 ******
2026-02-03 03:37:14.957715 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:37:14.957724 | orchestrator |
2026-02-03 03:37:14.957734 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-03 03:37:14.957753 | orchestrator | Tuesday 03 February 2026 03:37:11 +0000 (0:00:00.162) 0:01:13.296 ******
2026-02-03 03:37:14.957763 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:14.957773 | orchestrator |
2026-02-03 03:37:14.957783 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-03 03:37:14.957792 | orchestrator | Tuesday 03 February 2026 03:37:11 +0000 (0:00:00.133) 0:01:13.429 ******
2026-02-03 03:37:14.957802 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:14.957812 | orchestrator |
2026-02-03 03:37:14.957821 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-03 03:37:14.957831 | orchestrator | Tuesday 03 February 2026 03:37:11 +0000 (0:00:00.373) 0:01:13.803 ******
2026-02-03 03:37:14.957840 | orchestrator | ok: [testbed-node-5] => {
2026-02-03 03:37:14.957850 | orchestrator |     "vgs_report": {
2026-02-03 03:37:14.957861 | orchestrator |         "vg": []
2026-02-03 03:37:14.957891 | orchestrator |     }
2026-02-03 03:37:14.957909 | orchestrator | }
2026-02-03 03:37:14.957961 | orchestrator |
2026-02-03 03:37:14.957979 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-03 03:37:14.957994 | orchestrator | Tuesday 03 February 2026 03:37:12 +0000 (0:00:00.184) 0:01:13.987 ******
2026-02-03 03:37:14.958008 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:14.958092 | orchestrator |
2026-02-03 03:37:14.958109 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-03 03:37:14.958125 | orchestrator | Tuesday 03 February 2026 03:37:12 +0000 (0:00:00.156) 0:01:14.144 ******
2026-02-03 03:37:14.958190 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:14.958209 | orchestrator |
2026-02-03 03:37:14.958223 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-03 03:37:14.958233 | orchestrator | Tuesday 03 February 2026 03:37:12 +0000 (0:00:00.150) 0:01:14.295 ******
2026-02-03 03:37:14.958243 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:14.958252 | orchestrator |
2026-02-03 03:37:14.958262 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-03 03:37:14.958272 | orchestrator | Tuesday 03 February 2026 03:37:12 +0000 (0:00:00.139) 0:01:14.435 ******
2026-02-03 03:37:14.958281 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:14.958291 | orchestrator |
2026-02-03 03:37:14.958301 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-03 03:37:14.958310 | orchestrator | Tuesday 03 February 2026 03:37:12 +0000 (0:00:00.152) 0:01:14.588 ******
2026-02-03 03:37:14.958320 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:14.958329 | orchestrator |
2026-02-03 03:37:14.958339 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-02-03 03:37:14.958349 | orchestrator | Tuesday 03 February 2026 03:37:12 +0000 (0:00:00.164) 0:01:14.752 ******
2026-02-03 03:37:14.958358 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:14.958368 | orchestrator |
2026-02-03 03:37:14.958378 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-02-03 03:37:14.958389 | orchestrator | Tuesday 03 February 2026 03:37:12 +0000 (0:00:00.147) 0:01:14.900 ******
2026-02-03 03:37:14.958406 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:14.958421 | orchestrator |
2026-02-03 03:37:14.958437 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-02-03 03:37:14.958455 | orchestrator | Tuesday 03 February 2026 03:37:13 +0000 (0:00:00.157) 0:01:15.058 ******
2026-02-03 03:37:14.958472 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:14.958488 | orchestrator |
2026-02-03 03:37:14.958505 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-02-03 03:37:14.958537 | orchestrator | Tuesday 03 February 2026 03:37:13 +0000 (0:00:00.148) 0:01:15.206 ******
2026-02-03 03:37:14.958555 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:14.958570 | orchestrator |
2026-02-03 03:37:14.958588 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-02-03 03:37:14.958616 | orchestrator | Tuesday 03 February 2026 03:37:13 +0000 (0:00:00.147) 0:01:15.353 ******
2026-02-03 03:37:14.958636 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:14.958646 | orchestrator |
2026-02-03 03:37:14.958656 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-02-03 03:37:14.958666 | orchestrator | Tuesday 03 February 2026 03:37:13 +0000 (0:00:00.145) 0:01:15.499 ******
2026-02-03 03:37:14.958676 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:14.958685 | orchestrator |
2026-02-03 03:37:14.958695 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-02-03 03:37:14.958704 | orchestrator | Tuesday 03 February 2026 03:37:13 +0000 (0:00:00.391) 0:01:15.890 ******
2026-02-03 03:37:14.958715 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:14.958731 | orchestrator |
2026-02-03 03:37:14.958759 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-02-03 03:37:14.958774 | orchestrator | Tuesday 03 February 2026 03:37:14 +0000 (0:00:00.159) 0:01:16.049 ******
2026-02-03 03:37:14.958790 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:14.958804 | orchestrator |
2026-02-03 03:37:14.958820 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-02-03 03:37:14.958837 | orchestrator | Tuesday 03 February 2026 03:37:14 +0000 (0:00:00.144) 0:01:16.194 ******
2026-02-03 03:37:14.958855 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:14.958871 | orchestrator |
2026-02-03 03:37:14.958888 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-02-03 03:37:14.958904 | orchestrator | Tuesday 03 February 2026 03:37:14 +0000 (0:00:00.152) 0:01:16.347 ******
2026-02-03 03:37:14.958940 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'})
2026-02-03 03:37:14.958952 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'})
2026-02-03 03:37:14.958962 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:14.958971 | orchestrator |
2026-02-03 03:37:14.958981 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-02-03 03:37:14.958991 | orchestrator | Tuesday 03 February 2026 03:37:14 +0000 (0:00:00.175) 0:01:16.522 ******
2026-02-03 03:37:14.959001 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'})
2026-02-03 03:37:14.959010 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'})
2026-02-03 03:37:14.959020 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:14.959030 | orchestrator |
2026-02-03 03:37:14.959039 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-02-03 03:37:14.959049 | orchestrator | Tuesday 03 February 2026 03:37:14 +0000 (0:00:00.160) 0:01:16.683 ******
2026-02-03 03:37:14.959072 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'})
2026-02-03 03:37:18.220752 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'})
2026-02-03 03:37:18.220964 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:18.221646 | orchestrator |
2026-02-03 03:37:18.221699 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-02-03 03:37:18.221713 | orchestrator | Tuesday 03 February 2026 03:37:14 +0000 (0:00:00.186) 0:01:16.869 ******
2026-02-03 03:37:18.221725 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'})
2026-02-03 03:37:18.221737 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'})
2026-02-03 03:37:18.221772 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:18.221784 | orchestrator |
2026-02-03 03:37:18.221796 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-02-03 03:37:18.221807 | orchestrator | Tuesday 03 February 2026 03:37:15 +0000 (0:00:00.168) 0:01:17.038 ******
2026-02-03 03:37:18.221818 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'})
2026-02-03 03:37:18.221830 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'})
2026-02-03 03:37:18.221841 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:18.221852 | orchestrator |
2026-02-03 03:37:18.221863 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-03 03:37:18.221874 | orchestrator | Tuesday 03 February 2026 03:37:15 +0000 (0:00:00.164) 0:01:17.203 ******
2026-02-03 03:37:18.221885 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'})
2026-02-03 03:37:18.221896 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'})
2026-02-03 03:37:18.221907 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:18.221951 | orchestrator |
2026-02-03 03:37:18.221968 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-03 03:37:18.221984 | orchestrator | Tuesday 03 February 2026 03:37:15 +0000 (0:00:00.169) 0:01:17.372 ******
2026-02-03 03:37:18.222002 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'})
2026-02-03 03:37:18.222058 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'})
2026-02-03 03:37:18.222071 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:18.222082 | orchestrator |
2026-02-03 03:37:18.222138 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-03 03:37:18.222161 | orchestrator | Tuesday 03 February 2026 03:37:15 +0000 (0:00:00.163) 0:01:17.535 ******
2026-02-03 03:37:18.222172 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'})
2026-02-03 03:37:18.222184 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'})
2026-02-03 03:37:18.222195 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:18.222206 | orchestrator |
2026-02-03 03:37:18.222218 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-03 03:37:18.222229 | orchestrator | Tuesday 03 February 2026 03:37:15 +0000 (0:00:00.189) 0:01:17.725 ******
2026-02-03 03:37:18.222240 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:37:18.222252 | orchestrator |
2026-02-03 03:37:18.222263 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-03 03:37:18.222275 | orchestrator | Tuesday 03 February 2026 03:37:16 +0000 (0:00:00.764) 0:01:18.490 ******
2026-02-03 03:37:18.222286 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:37:18.222297 | orchestrator |
2026-02-03 03:37:18.222308 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-03 03:37:18.222320 | orchestrator | Tuesday 03 February 2026 03:37:17 +0000 (0:00:00.564) 0:01:19.054 ******
2026-02-03 03:37:18.222332 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:37:18.222343 | orchestrator |
2026-02-03 03:37:18.222354 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-03 03:37:18.222365 | orchestrator | Tuesday 03 February 2026 03:37:17 +0000 (0:00:00.177) 0:01:19.231 ******
2026-02-03 03:37:18.222388 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'vg_name': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'})
2026-02-03 03:37:18.222401 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'vg_name': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'})
2026-02-03 03:37:18.222412 | orchestrator |
2026-02-03 03:37:18.222423 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-03 03:37:18.222434 | orchestrator | Tuesday 03 February 2026 03:37:17 +0000 (0:00:00.191) 0:01:19.423 ******
2026-02-03 03:37:18.222467 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'})
2026-02-03 03:37:18.222486 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'})
2026-02-03 03:37:18.222497 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:18.222508 | orchestrator |
2026-02-03 03:37:18.222519 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-02-03 03:37:18.222531 | orchestrator | Tuesday 03 February 2026 03:37:17 +0000 (0:00:00.181) 0:01:19.604 ******
2026-02-03 03:37:18.222542 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'})
2026-02-03 03:37:18.222553 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'})
2026-02-03 03:37:18.222565 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:18.222576 | orchestrator |
2026-02-03 03:37:18.222588 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-02-03 03:37:18.222599 | orchestrator | Tuesday 03 February 2026 03:37:17 +0000 (0:00:00.179) 0:01:19.784 ******
2026-02-03 03:37:18.222610 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'})
2026-02-03 03:37:18.222621 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'})
2026-02-03 03:37:18.222632 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:18.222643 | orchestrator |
2026-02-03 03:37:18.222655 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-02-03 03:37:18.222666 | orchestrator | Tuesday 03 February 2026 03:37:18 +0000 (0:00:00.156) 0:01:19.941 ******
2026-02-03 03:37:18.222677 | orchestrator | ok: [testbed-node-5] => {
2026-02-03 03:37:18.222688 | orchestrator |     "lvm_report": {
2026-02-03 03:37:18.222701 | orchestrator |         "lv": [
2026-02-03 03:37:18.222712 | orchestrator |             {
2026-02-03 03:37:18.222723 | orchestrator |                 "lv_name": "osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8",
2026-02-03 03:37:18.222736 | orchestrator |                 "vg_name": "ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8"
2026-02-03 03:37:18.222747 | orchestrator |             },
2026-02-03 03:37:18.222758 | orchestrator |             {
2026-02-03 03:37:18.222770 | orchestrator |                 "lv_name": "osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb",
2026-02-03 03:37:18.222781 | orchestrator |                 "vg_name": "ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb"
2026-02-03 03:37:18.222792 | orchestrator |             }
2026-02-03 03:37:18.222803 | orchestrator |         ],
2026-02-03 03:37:18.222814 | orchestrator |         "pv": [
2026-02-03 03:37:18.222825 | orchestrator |             {
2026-02-03 03:37:18.222836 | orchestrator |                 "pv_name": "/dev/sdb",
2026-02-03 03:37:18.222848 | orchestrator |                 "vg_name": "ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb"
2026-02-03 03:37:18.222859 | orchestrator |             },
2026-02-03 03:37:18.222870 | orchestrator |             {
2026-02-03 03:37:18.222881 | orchestrator |                 "pv_name": "/dev/sdc",
2026-02-03 03:37:18.222904 | orchestrator |                 "vg_name": "ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8"
2026-02-03 03:37:18.222946 | orchestrator |             }
2026-02-03 03:37:18.222965 | orchestrator |         ]
2026-02-03 03:37:18.222985 | orchestrator |     }
2026-02-03 03:37:18.223004 | orchestrator | }
2026-02-03 03:37:18.223023 | orchestrator |
2026-02-03 03:37:18.223039 | orchestrator | PLAY RECAP *********************************************************************
2026-02-03 03:37:18.223051 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-02-03 03:37:18.223063 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-02-03 03:37:18.223074 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-02-03 03:37:18.223085 | orchestrator |
2026-02-03 03:37:18.223096 | orchestrator |
2026-02-03 03:37:18.223107 | orchestrator |
2026-02-03 03:37:18.223118 | orchestrator | TASKS RECAP ********************************************************************
2026-02-03 03:37:18.223129 | orchestrator | Tuesday 03 February 2026 03:37:18 +0000 (0:00:00.165) 0:01:20.106 ******
2026-02-03 03:37:18.223140 | orchestrator | ===============================================================================
2026-02-03 03:37:18.223151 | orchestrator | Create block VGs -------------------------------------------------------- 5.81s
2026-02-03 03:37:18.223162 | orchestrator | Create block LVs -------------------------------------------------------- 4.27s
2026-02-03 03:37:18.223173 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.92s
2026-02-03 03:37:18.223184 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.77s
2026-02-03 03:37:18.223195 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.70s
2026-02-03 03:37:18.223206 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.63s
2026-02-03 03:37:18.223216 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.62s
2026-02-03 03:37:18.223227 | orchestrator | Add known links to the list of available block devices ------------------ 1.46s
2026-02-03 03:37:18.223248 | orchestrator | Add known partitions to the list of available block devices ------------- 1.41s
2026-02-03 03:37:18.613535 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.29s
2026-02-03 03:37:18.613631 | orchestrator | Add known links to the list of available block devices ------------------ 1.04s
2026-02-03 03:37:18.613645 | orchestrator | Add known links to the list of available block devices ------------------ 0.97s
2026-02-03 03:37:18.613676 | orchestrator | Calculate VG sizes (with buffer) ---------------------------------------- 0.88s
2026-02-03 03:37:18.613694 | orchestrator | Print LVM report data --------------------------------------------------- 0.82s
2026-02-03 03:37:18.613711 | orchestrator | Get initial list of available block devices ----------------------------- 0.80s
2026-02-03 03:37:18.613727 | orchestrator | Add known links to the list of available block devices ------------------ 0.80s
2026-02-03 03:37:18.613743 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.76s
2026-02-03 03:37:18.613760 | orchestrator | Add known partitions to the list of available block devices ------------- 0.76s
2026-02-03 03:37:18.613775 | orchestrator | Count OSDs put on ceph_db_wal_devices defined in lvm_volumes ------------ 0.75s
2026-02-03 03:37:18.613789 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.75s
2026-02-03 03:37:31.041465 | orchestrator | 2026-02-03 03:37:31 | INFO  | Task 16d37cbc-2e7b-4e43-afcb-043916fa1031 (facts) was prepared for execution.
2026-02-03 03:37:31.041561 | orchestrator | 2026-02-03 03:37:31 | INFO  | It takes a moment until task 16d37cbc-2e7b-4e43-afcb-043916fa1031 (facts) has been started and output is visible here.
2026-02-03 03:37:45.047031 | orchestrator |
2026-02-03 03:37:45.047130 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-02-03 03:37:45.047168 | orchestrator |
2026-02-03 03:37:45.047178 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-03 03:37:45.047187 | orchestrator | Tuesday 03 February 2026 03:37:35 +0000 (0:00:00.311) 0:00:00.311 ******
2026-02-03 03:37:45.047195 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:37:45.047204 | orchestrator | ok: [testbed-manager]
2026-02-03 03:37:45.047212 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:37:45.047220 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:37:45.047228 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:37:45.047236 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:37:45.047244 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:37:45.047252 | orchestrator |
2026-02-03 03:37:45.047260 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-03 03:37:45.047269 | orchestrator | Tuesday 03 February 2026 03:37:36 +0000 (0:00:01.283) 0:00:01.594 ******
2026-02-03 03:37:45.047282 | orchestrator | skipping: [testbed-manager]
2026-02-03 03:37:45.047297 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:37:45.047309 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:37:45.047320 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:37:45.047334 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:37:45.047346 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:37:45.047360 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:45.047374 | orchestrator |
2026-02-03 03:37:45.047388 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-03 03:37:45.047396 | orchestrator |
2026-02-03 03:37:45.047404 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-03 03:37:45.047412 | orchestrator | Tuesday 03 February 2026 03:37:38 +0000 (0:00:01.513) 0:00:03.108 ******
2026-02-03 03:37:45.047420 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:37:45.047428 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:37:45.047436 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:37:45.047444 | orchestrator | ok: [testbed-manager]
2026-02-03 03:37:45.047452 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:37:45.047460 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:37:45.047468 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:37:45.047476 | orchestrator |
2026-02-03 03:37:45.047484 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-02-03 03:37:45.047492 | orchestrator |
2026-02-03 03:37:45.047500 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-02-03 03:37:45.047508 | orchestrator | Tuesday 03 February 2026 03:37:43 +0000 (0:00:05.491) 0:00:08.599 ******
2026-02-03 03:37:45.047516 | orchestrator | skipping: [testbed-manager]
2026-02-03 03:37:45.047524 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:37:45.047532 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:37:45.047540 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:37:45.047548 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:37:45.047555 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:37:45.047563 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:37:45.047571 | orchestrator |
2026-02-03 03:37:45.047580 | orchestrator | PLAY RECAP *********************************************************************
2026-02-03 03:37:45.047590 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-03 03:37:45.047601 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-03 03:37:45.047611 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-03 03:37:45.047621 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-03 03:37:45.047630 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-03 03:37:45.047647 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-03 03:37:45.047656 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-03 03:37:45.047665 | orchestrator |
2026-02-03 03:37:45.047675 | orchestrator |
2026-02-03 03:37:45.047685 | orchestrator | TASKS RECAP ********************************************************************
2026-02-03 03:37:45.047709 | orchestrator | Tuesday 03 February 2026 03:37:44 +0000 (0:00:00.559) 0:00:09.159 ******
2026-02-03 03:37:45.047719 | orchestrator | ===============================================================================
2026-02-03 03:37:45.047728 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.49s
2026-02-03 03:37:45.047738 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.51s
2026-02-03 03:37:45.047747 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.28s
2026-02-03 03:37:45.047755 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s
2026-02-03 03:37:47.552825 | orchestrator | 2026-02-03 03:37:47 | INFO  | Task de0a082c-cd5c-4587-95ed-e0e0e43885bd (ceph) was prepared for execution.
2026-02-03 03:37:47.552997 | orchestrator | 2026-02-03 03:37:47 | INFO  | It takes a moment until task de0a082c-cd5c-4587-95ed-e0e0e43885bd (ceph) has been started and output is visible here.
2026-02-03 03:38:06.702214 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-03 03:38:06.702308 | orchestrator | 2.16.14
2026-02-03 03:38:06.702317 | orchestrator |
2026-02-03 03:38:06.702322 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-02-03 03:38:06.702348 | orchestrator |
2026-02-03 03:38:06.702354 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-03 03:38:06.702360 | orchestrator | Tuesday 03 February 2026 03:37:52 +0000 (0:00:00.882) 0:00:00.882 ******
2026-02-03 03:38:06.702366 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 03:38:06.702372 | orchestrator |
2026-02-03 03:38:06.702376 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-03 03:38:06.702381 | orchestrator | Tuesday 03 February 2026 03:37:54 +0000 (0:00:01.216) 0:00:02.099 ******
2026-02-03 03:38:06.702386 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:38:06.702391 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:38:06.702396 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:38:06.702400 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:38:06.702405 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:38:06.702409 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:38:06.702414 | orchestrator |
2026-02-03 03:38:06.702419 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-03 03:38:06.702423 | orchestrator | Tuesday 03 February 2026 03:37:55 +0000 (0:00:01.394) 0:00:03.493 ******
2026-02-03 03:38:06.702428 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:38:06.702432 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:38:06.702437 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:38:06.702441 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:38:06.702446 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:38:06.702450 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:38:06.702455 | orchestrator |
2026-02-03 03:38:06.702459 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-03 03:38:06.702464 | orchestrator | Tuesday 03 February 2026 03:37:56 +0000 (0:00:00.821) 0:00:04.315 ******
2026-02-03 03:38:06.702468 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:38:06.702473 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:38:06.702477 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:38:06.702482 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:38:06.702503 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:38:06.702508 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:38:06.702512 | orchestrator |
2026-02-03 03:38:06.702517 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-03 03:38:06.702521 | orchestrator | Tuesday 03 February 2026 03:37:57 +0000 (0:00:00.957) 0:00:05.273 ******
2026-02-03 03:38:06.702526 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:38:06.702530 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:38:06.702534 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:38:06.702539 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:38:06.702543 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:38:06.702548 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:38:06.702552 | orchestrator |
2026-02-03 03:38:06.702557 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-03 03:38:06.702561 | orchestrator | Tuesday 03 February 2026 03:37:58 +0000 (0:00:00.845) 0:00:06.118 ******
2026-02-03 03:38:06.702566 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:38:06.702570 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:38:06.702574 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:38:06.702579 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:38:06.702583 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:38:06.702588 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:38:06.702592 | orchestrator |
2026-02-03 03:38:06.702596 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-03 03:38:06.702601 | orchestrator | Tuesday 03 February 2026 03:37:58 +0000 (0:00:00.608) 0:00:06.727 ******
2026-02-03 03:38:06.702605 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:38:06.702610 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:38:06.702614 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:38:06.702618 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:38:06.702623 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:38:06.702627 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:38:06.702632 | orchestrator |
2026-02-03 03:38:06.702636 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-03 03:38:06.702641 | orchestrator | Tuesday 03 February 2026 03:37:59 +0000 (0:00:00.864) 0:00:07.592 ******
2026-02-03 03:38:06.702645 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:38:06.702650 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:38:06.702655 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:38:06.702659 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:38:06.702664 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:38:06.702668 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:38:06.702673 | orchestrator |
2026-02-03 03:38:06.702677 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-03 03:38:06.702682 | orchestrator | Tuesday 03 February 2026 03:38:00 +0000 (0:00:00.649) 0:00:08.242 ******
2026-02-03 03:38:06.702686 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:38:06.702690 | orchestrator |
ok: [testbed-node-4] 2026-02-03 03:38:06.702695 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:38:06.702699 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:38:06.702704 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:38:06.702719 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:38:06.702723 | orchestrator | 2026-02-03 03:38:06.702728 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-03 03:38:06.702733 | orchestrator | Tuesday 03 February 2026 03:38:01 +0000 (0:00:00.843) 0:00:09.085 ****** 2026-02-03 03:38:06.702737 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 03:38:06.702742 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 03:38:06.702746 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 03:38:06.702750 | orchestrator | 2026-02-03 03:38:06.702755 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-03 03:38:06.702759 | orchestrator | Tuesday 03 February 2026 03:38:01 +0000 (0:00:00.691) 0:00:09.777 ****** 2026-02-03 03:38:06.702769 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:38:06.702774 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:38:06.702778 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:38:06.702792 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:38:06.702797 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:38:06.702802 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:38:06.702806 | orchestrator | 2026-02-03 03:38:06.702811 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-03 03:38:06.702815 | orchestrator | Tuesday 03 February 2026 03:38:02 +0000 (0:00:00.767) 0:00:10.544 ****** 2026-02-03 03:38:06.702820 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => 
(item=testbed-node-0) 2026-02-03 03:38:06.702824 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 03:38:06.702829 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 03:38:06.702833 | orchestrator | 2026-02-03 03:38:06.702838 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-03 03:38:06.702842 | orchestrator | Tuesday 03 February 2026 03:38:05 +0000 (0:00:02.560) 0:00:13.104 ****** 2026-02-03 03:38:06.702847 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-03 03:38:06.702852 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-03 03:38:06.702856 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-03 03:38:06.702861 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:38:06.702915 | orchestrator | 2026-02-03 03:38:06.702923 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-03 03:38:06.702930 | orchestrator | Tuesday 03 February 2026 03:38:05 +0000 (0:00:00.491) 0:00:13.595 ****** 2026-02-03 03:38:06.702940 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-03 03:38:06.702953 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-03 03:38:06.702960 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-03 03:38:06.702968 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:38:06.702975 | orchestrator | 2026-02-03 03:38:06.702982 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-03 03:38:06.702989 | orchestrator | Tuesday 03 February 2026 03:38:06 +0000 (0:00:00.679) 0:00:14.275 ****** 2026-02-03 03:38:06.702999 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:06.703009 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:06.703017 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:06.703028 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:38:06.703032 | orchestrator | 2026-02-03 03:38:06.703041 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] 
*************************** 2026-02-03 03:38:06.703046 | orchestrator | Tuesday 03 February 2026 03:38:06 +0000 (0:00:00.234) 0:00:14.509 ****** 2026-02-03 03:38:06.703059 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-03 03:38:03.472659', 'end': '2026-02-03 03:38:03.525734', 'delta': '0:00:00.053075', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-03 03:38:16.732679 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-03 03:38:04.085848', 'end': '2026-02-03 03:38:04.121082', 'delta': '0:00:00.035234', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-03 03:38:16.732781 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-03 03:38:04.674703', 'end': '2026-02-03 03:38:04.715662', 'delta': 
'0:00:00.040959', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-03 03:38:16.732793 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:38:16.732802 | orchestrator | 2026-02-03 03:38:16.732811 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-03 03:38:16.732819 | orchestrator | Tuesday 03 February 2026 03:38:06 +0000 (0:00:00.180) 0:00:14.690 ****** 2026-02-03 03:38:16.732826 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:38:16.732832 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:38:16.732839 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:38:16.732844 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:38:16.732851 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:38:16.732901 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:38:16.732907 | orchestrator | 2026-02-03 03:38:16.732914 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-03 03:38:16.732921 | orchestrator | Tuesday 03 February 2026 03:38:07 +0000 (0:00:00.749) 0:00:15.439 ****** 2026-02-03 03:38:16.732927 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-03 03:38:16.732933 | orchestrator | 2026-02-03 03:38:16.732940 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-03 03:38:16.732946 | orchestrator | Tuesday 03 February 2026 03:38:08 +0000 (0:00:00.678) 0:00:16.118 ****** 2026-02-03 03:38:16.732974 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:38:16.732980 | 
orchestrator | skipping: [testbed-node-4] 2026-02-03 03:38:16.732987 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:38:16.732993 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:38:16.733000 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:38:16.733007 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:38:16.733013 | orchestrator | 2026-02-03 03:38:16.733020 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-03 03:38:16.733027 | orchestrator | Tuesday 03 February 2026 03:38:09 +0000 (0:00:00.915) 0:00:17.033 ****** 2026-02-03 03:38:16.733034 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:38:16.733040 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:38:16.733047 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:38:16.733053 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:38:16.733060 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:38:16.733066 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:38:16.733072 | orchestrator | 2026-02-03 03:38:16.733078 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-03 03:38:16.733085 | orchestrator | Tuesday 03 February 2026 03:38:10 +0000 (0:00:01.216) 0:00:18.250 ****** 2026-02-03 03:38:16.733092 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:38:16.733098 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:38:16.733105 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:38:16.733111 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:38:16.733118 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:38:16.733151 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:38:16.733158 | orchestrator | 2026-02-03 03:38:16.733165 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-03 03:38:16.733172 | orchestrator | Tuesday 03 February 2026 03:38:10 
+0000 (0:00:00.627) 0:00:18.877 ****** 2026-02-03 03:38:16.733179 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:38:16.733185 | orchestrator | 2026-02-03 03:38:16.733192 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-03 03:38:16.733199 | orchestrator | Tuesday 03 February 2026 03:38:11 +0000 (0:00:00.131) 0:00:19.008 ****** 2026-02-03 03:38:16.733205 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:38:16.733211 | orchestrator | 2026-02-03 03:38:16.733218 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-03 03:38:16.733225 | orchestrator | Tuesday 03 February 2026 03:38:11 +0000 (0:00:00.235) 0:00:19.243 ****** 2026-02-03 03:38:16.733232 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:38:16.733238 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:38:16.733245 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:38:16.733252 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:38:16.733258 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:38:16.733266 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:38:16.733273 | orchestrator | 2026-02-03 03:38:16.733297 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-03 03:38:16.733304 | orchestrator | Tuesday 03 February 2026 03:38:12 +0000 (0:00:00.802) 0:00:20.046 ****** 2026-02-03 03:38:16.733310 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:38:16.733316 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:38:16.733322 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:38:16.733328 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:38:16.733334 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:38:16.733340 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:38:16.733346 | orchestrator | 2026-02-03 03:38:16.733351 | orchestrator | TASK [ceph-facts : 
Set_fact build devices from resolved symlinks] ************** 2026-02-03 03:38:16.733358 | orchestrator | Tuesday 03 February 2026 03:38:12 +0000 (0:00:00.686) 0:00:20.732 ****** 2026-02-03 03:38:16.733365 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:38:16.733371 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:38:16.733377 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:38:16.733391 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:38:16.733398 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:38:16.733404 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:38:16.733412 | orchestrator | 2026-02-03 03:38:16.733418 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-03 03:38:16.733424 | orchestrator | Tuesday 03 February 2026 03:38:13 +0000 (0:00:00.827) 0:00:21.560 ****** 2026-02-03 03:38:16.733431 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:38:16.733437 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:38:16.733443 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:38:16.733450 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:38:16.733455 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:38:16.733461 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:38:16.733467 | orchestrator | 2026-02-03 03:38:16.733473 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-03 03:38:16.733480 | orchestrator | Tuesday 03 February 2026 03:38:14 +0000 (0:00:00.643) 0:00:22.204 ****** 2026-02-03 03:38:16.733486 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:38:16.733491 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:38:16.733497 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:38:16.733503 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:38:16.733509 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:38:16.733515 | orchestrator 
| skipping: [testbed-node-2] 2026-02-03 03:38:16.733522 | orchestrator | 2026-02-03 03:38:16.733528 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-03 03:38:16.733535 | orchestrator | Tuesday 03 February 2026 03:38:15 +0000 (0:00:00.830) 0:00:23.034 ****** 2026-02-03 03:38:16.733542 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:38:16.733549 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:38:16.733555 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:38:16.733562 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:38:16.733568 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:38:16.733575 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:38:16.733582 | orchestrator | 2026-02-03 03:38:16.733588 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-03 03:38:16.733597 | orchestrator | Tuesday 03 February 2026 03:38:15 +0000 (0:00:00.645) 0:00:23.680 ****** 2026-02-03 03:38:16.733604 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:38:16.733610 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:38:16.733617 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:38:16.733624 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:38:16.733630 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:38:16.733637 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:38:16.733644 | orchestrator | 2026-02-03 03:38:16.733650 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-03 03:38:16.733657 | orchestrator | Tuesday 03 February 2026 03:38:16 +0000 (0:00:00.926) 0:00:24.606 ****** 2026-02-03 03:38:16.733666 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--85b6ff9c--bd3f--596f--9d81--0006b9d69e29-osd--block--85b6ff9c--bd3f--596f--9d81--0006b9d69e29', 'dm-uuid-LVM-eCnBPCzOsBAMg7ZG1zzxsebDLR9lBnAnVax7APxd4A5hvnIJK2L8WYuJjgErTdLp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:16.733680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bafb60f3--a5a9--526b--adce--8ea58a9a19cd-osd--block--bafb60f3--a5a9--526b--adce--8ea58a9a19cd', 'dm-uuid-LVM-stKE3AAHbU7tUFxIQAJ72dtWy4EVot1jnVMQamLoChpHBSYL0cLNGgZFRZ56lw3T'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:16.733703 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:16.842005 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:16.842172 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:16.842188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:16.842199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:16.843016 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:16.843104 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:16.843125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:16.843189 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part1', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part14', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part15', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part16', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:38:16.843222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--85b6ff9c--bd3f--596f--9d81--0006b9d69e29-osd--block--85b6ff9c--bd3f--596f--9d81--0006b9d69e29'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Xh8ZTx-AObI-x7Qe-6Flc-GeSw-194p-Pfmv8i', 'scsi-0QEMU_QEMU_HARDDISK_b4cf4752-8315-482c-8a5b-0aee9859091f', 'scsi-SQEMU_QEMU_HARDDISK_b4cf4752-8315-482c-8a5b-0aee9859091f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:38:16.843238 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--121565c5--01e5--5794--959e--075d91e35362-osd--block--121565c5--01e5--5794--959e--075d91e35362', 'dm-uuid-LVM-JxrjzObQ9uufb9OS44FMciQneXibANhw0SrgRPhb81g1cZ8CRqdeozHyruPhRzun'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:16.843252 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--bafb60f3--a5a9--526b--adce--8ea58a9a19cd-osd--block--bafb60f3--a5a9--526b--adce--8ea58a9a19cd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MNylkH-UFIw-FcM9-RNy8-22Oh-QCDT-pfyDSJ', 'scsi-0QEMU_QEMU_HARDDISK_8097be92-44ca-4be8-a1da-39ba5887696e', 'scsi-SQEMU_QEMU_HARDDISK_8097be92-44ca-4be8-a1da-39ba5887696e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:38:16.843278 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1a37b12a--042e--589b--8d7d--13944ef33291-osd--block--1a37b12a--042e--589b--8d7d--13944ef33291', 'dm-uuid-LVM-F6tlR8rX28mHBuGZmIB9CPxCef1PwVO1F69HDz3pfwyuxUfx8QlY6u3q4wNOYZvt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:16.843302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_30942d1f-f704-43cc-bdd1-e5a5821a35c3', 'scsi-SQEMU_QEMU_HARDDISK_30942d1f-f704-43cc-bdd1-e5a5821a35c3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:38:17.002979 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.003064 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-03-02-24-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:38:17.003076 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-02-03 03:38:17.003084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.003091 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.003098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.003128 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:38:17.003148 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.003153 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.003157 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.003179 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:38:17.003186 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--121565c5--01e5--5794--959e--075d91e35362-osd--block--121565c5--01e5--5794--959e--075d91e35362'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-OIAfSx-9FrO-G71T-2YtW-9cXZ-u9sv-iVlruI', 'scsi-0QEMU_QEMU_HARDDISK_6b074c22-654d-40e5-9251-7e10d9fad41a', 'scsi-SQEMU_QEMU_HARDDISK_6b074c22-654d-40e5-9251-7e10d9fad41a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:38:17.003200 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1a37b12a--042e--589b--8d7d--13944ef33291-osd--block--1a37b12a--042e--589b--8d7d--13944ef33291'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QlIL1O-6aa2-xc1n-eTaR-0yU7-qpeR-rfKE1n', 'scsi-0QEMU_QEMU_HARDDISK_f58f055b-eadc-4fe1-a72e-2d1917f1f2dd', 'scsi-SQEMU_QEMU_HARDDISK_f58f055b-eadc-4fe1-a72e-2d1917f1f2dd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:38:17.003209 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15b94581-7087-40af-83f2-cd9970e768be', 'scsi-SQEMU_QEMU_HARDDISK_15b94581-7087-40af-83f2-cd9970e768be'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:38:17.120175 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-03-02-24-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:38:17.120275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9cbb71d1--90c1--5063--b304--f845b9e79bfb-osd--block--9cbb71d1--90c1--5063--b304--f845b9e79bfb', 'dm-uuid-LVM-mOPc0Zn7dvz2LW84SWB0gFMNdSnKuErspTdMvdsDAFIMSx8jpl0O46FJH5Fa8Xca'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.120291 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--77c51d77--cdc1--5563--af81--33d9bc4e9bd8-osd--block--77c51d77--cdc1--5563--af81--33d9bc4e9bd8', 
'dm-uuid-LVM-Wbq8zZZmzC2gBNhxYxtVTvfLotN9I39ewfUHEKJIYaxWx1lem6PI2cmyC5FHw26a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.120304 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.120343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.120369 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.120380 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.120390 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.120419 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.120430 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.120440 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.120460 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part1', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part14', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part15', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part16', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:38:17.120481 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9cbb71d1--90c1--5063--b304--f845b9e79bfb-osd--block--9cbb71d1--90c1--5063--b304--f845b9e79bfb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fs9ehM-rHKw-gnft-ZAPg-F21u-3MhY-bxvv54', 'scsi-0QEMU_QEMU_HARDDISK_a2e14d93-a486-403c-9c37-4f6de49ddee5', 'scsi-SQEMU_QEMU_HARDDISK_a2e14d93-a486-403c-9c37-4f6de49ddee5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:38:17.120500 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--77c51d77--cdc1--5563--af81--33d9bc4e9bd8-osd--block--77c51d77--cdc1--5563--af81--33d9bc4e9bd8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XC0deN-vGzU-6Pu8-7l0p-bm5X-RdCc-NCjXuW', 'scsi-0QEMU_QEMU_HARDDISK_0bcbc917-1e4e-4947-8603-c7f49bd04ea8', 'scsi-SQEMU_QEMU_HARDDISK_0bcbc917-1e4e-4947-8603-c7f49bd04ea8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:38:17.427467 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ed5f26b-b68f-43a9-951f-f4acae255308', 'scsi-SQEMU_QEMU_HARDDISK_1ed5f26b-b68f-43a9-951f-f4acae255308'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:38:17.427537 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:38:17.427545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-03-02-24-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:38:17.427568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.427573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.427588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.427592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.427596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.427600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.427614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.427618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.427627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part1', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part14', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part15', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part16', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:38:17.427636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-03-02-24-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:38:17.427640 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:38:17.427644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.427648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.427656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.674609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.674735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.674750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.674763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.674789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.674828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part1', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part14', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part15', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part16', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:38:17.674926 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-03-02-24-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:38:17.674954 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:38:17.674975 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:38:17.674996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.675017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.675046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.675059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.675070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.675082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.675093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.675115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:38:17.887600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part1', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part14', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part15', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part16', 
'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-03 03:38:17.887711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-03-02-24-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-03 03:38:17.887728 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:38:17.887739 | orchestrator |
2026-02-03 03:38:17.887749 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-03 03:38:17.887760 | orchestrator | Tuesday 03 February 2026 03:38:17 +0000 (0:00:01.052) 0:00:25.659 ******
2026-02-03 03:38:17.887771 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--85b6ff9c--bd3f--596f--9d81--0006b9d69e29-osd--block--85b6ff9c--bd3f--596f--9d81--0006b9d69e29', 'dm-uuid-LVM-eCnBPCzOsBAMg7ZG1zzxsebDLR9lBnAnVax7APxd4A5hvnIJK2L8WYuJjgErTdLp'], 'labels':
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:17.887817 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bafb60f3--a5a9--526b--adce--8ea58a9a19cd-osd--block--bafb60f3--a5a9--526b--adce--8ea58a9a19cd', 'dm-uuid-LVM-stKE3AAHbU7tUFxIQAJ72dtWy4EVot1jnVMQamLoChpHBSYL0cLNGgZFRZ56lw3T'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:17.887828 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:17.887839 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery 
| default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:17.887906 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:17.887926 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:17.887941 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:17.887965 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:17.887983 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:17.963723 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:17.963847 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--121565c5--01e5--5794--959e--075d91e35362-osd--block--121565c5--01e5--5794--959e--075d91e35362', 'dm-uuid-LVM-JxrjzObQ9uufb9OS44FMciQneXibANhw0SrgRPhb81g1cZ8CRqdeozHyruPhRzun'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:17.963923 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part1', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part14', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part15', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part16', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
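The repeated `skipping` entries above all carry the same `false_condition`: `osd_auto_discovery | default(False) | bool`. As a hedged sketch (not ceph-ansible's actual implementation), the device filter such a task would apply once auto-discovery is enabled can be modeled in Python; the eligibility checks on `partitions`, `holders`, and `removable` are assumptions inferred from the device facts shown in the log:

```python
# Hedged sketch of the auto-discovery filter implied by the log; the real
# ceph-facts task is written in Ansible/Jinja2 and may differ in detail.
def auto_discover_devices(devices, osd_auto_discovery=False):
    """Return /dev paths of disks eligible as OSDs.

    Mirrors the log's guard 'osd_auto_discovery | default(False) | bool':
    with the default of False, every loop item is skipped.
    """
    if not osd_auto_discovery:
        return []  # matches this job: each device item reports "skipping"
    return [
        f"/dev/{name}"
        for name, info in devices.items()
        if not info["partitions"]     # only whole, unpartitioned disks
        and not info["holders"]       # not already claimed (e.g. by an LVM VG)
        and info["removable"] == "0"  # exclude removable media such as sr0
    ]
```

With `osd_auto_discovery` left at its default of `False`, every loop item short-circuits to an empty result, which is exactly why each device item in the log is skipped.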
2026-02-03 03:38:17.963982 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1a37b12a--042e--589b--8d7d--13944ef33291-osd--block--1a37b12a--042e--589b--8d7d--13944ef33291', 'dm-uuid-LVM-F6tlR8rX28mHBuGZmIB9CPxCef1PwVO1F69HDz3pfwyuxUfx8QlY6u3q4wNOYZvt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:17.964003 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--85b6ff9c--bd3f--596f--9d81--0006b9d69e29-osd--block--85b6ff9c--bd3f--596f--9d81--0006b9d69e29'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Xh8ZTx-AObI-x7Qe-6Flc-GeSw-194p-Pfmv8i', 'scsi-0QEMU_QEMU_HARDDISK_b4cf4752-8315-482c-8a5b-0aee9859091f', 'scsi-SQEMU_QEMU_HARDDISK_b4cf4752-8315-482c-8a5b-0aee9859091f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:17.964016 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:17.964028 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--bafb60f3--a5a9--526b--adce--8ea58a9a19cd-osd--block--bafb60f3--a5a9--526b--adce--8ea58a9a19cd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MNylkH-UFIw-FcM9-RNy8-22Oh-QCDT-pfyDSJ', 'scsi-0QEMU_QEMU_HARDDISK_8097be92-44ca-4be8-a1da-39ba5887696e', 'scsi-SQEMU_QEMU_HARDDISK_8097be92-44ca-4be8-a1da-39ba5887696e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:17.964047 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:17.964068 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_30942d1f-f704-43cc-bdd1-e5a5821a35c3', 'scsi-SQEMU_QEMU_HARDDISK_30942d1f-f704-43cc-bdd1-e5a5821a35c3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.394076 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.394184 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-03-02-24-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.394197 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.394205 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.394228 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.394235 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.394256 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.394271 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
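Entries elsewhere in this task also record a second guard, `inventory_hostname in groups.get(osd_group_name, [])`, which explains why the control-plane hosts testbed-node-0 through testbed-node-2 skip the device loop outright while only testbed-node-3/4/5 evaluate `osd_auto_discovery`. A minimal Python sketch of that membership check (assuming the conventional group name `osds`, which the log does not spell out):

```python
# Hedged illustration of the group-membership guard seen in the log output.
def runs_on_host(inventory_hostname, groups, osd_group_name="osds"):
    # Mirrors the Jinja2 condition:
    #   inventory_hostname in groups.get(osd_group_name, [])
    # groups maps group names to lists of inventory hostnames.
    return inventory_hostname in groups.get(osd_group_name, [])
```

Hosts outside the OSD group fail this check first, so their per-device items are skipped with this `false_condition` instead of the `osd_auto_discovery` one.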
2026-02-03 03:38:18.394287 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--121565c5--01e5--5794--959e--075d91e35362-osd--block--121565c5--01e5--5794--959e--075d91e35362'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-OIAfSx-9FrO-G71T-2YtW-9cXZ-u9sv-iVlruI', 'scsi-0QEMU_QEMU_HARDDISK_6b074c22-654d-40e5-9251-7e10d9fad41a', 'scsi-SQEMU_QEMU_HARDDISK_6b074c22-654d-40e5-9251-7e10d9fad41a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.394294 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:38:18.394309 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1a37b12a--042e--589b--8d7d--13944ef33291-osd--block--1a37b12a--042e--589b--8d7d--13944ef33291'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QlIL1O-6aa2-xc1n-eTaR-0yU7-qpeR-rfKE1n', 'scsi-0QEMU_QEMU_HARDDISK_f58f055b-eadc-4fe1-a72e-2d1917f1f2dd', 'scsi-SQEMU_QEMU_HARDDISK_f58f055b-eadc-4fe1-a72e-2d1917f1f2dd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.516740 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15b94581-7087-40af-83f2-cd9970e768be', 'scsi-SQEMU_QEMU_HARDDISK_15b94581-7087-40af-83f2-cd9970e768be'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.516947 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-03-02-24-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.517000 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9cbb71d1--90c1--5063--b304--f845b9e79bfb-osd--block--9cbb71d1--90c1--5063--b304--f845b9e79bfb', 'dm-uuid-LVM-mOPc0Zn7dvz2LW84SWB0gFMNdSnKuErspTdMvdsDAFIMSx8jpl0O46FJH5Fa8Xca'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.517020 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--77c51d77--cdc1--5563--af81--33d9bc4e9bd8-osd--block--77c51d77--cdc1--5563--af81--33d9bc4e9bd8', 'dm-uuid-LVM-Wbq8zZZmzC2gBNhxYxtVTvfLotN9I39ewfUHEKJIYaxWx1lem6PI2cmyC5FHw26a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.517037 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:38:18.517054 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.517092 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.517118 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.517134 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.517158 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.517174 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.517189 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.517204 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.517234 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.569730 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.569904 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.569925 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.569934 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.569943 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.569974 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part1', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part14', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part15', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part16', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-03 03:38:18.569993 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.570002 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.570132 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--9cbb71d1--90c1--5063--b304--f845b9e79bfb-osd--block--9cbb71d1--90c1--5063--b304--f845b9e79bfb'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fs9ehM-rHKw-gnft-ZAPg-F21u-3MhY-bxvv54', 'scsi-0QEMU_QEMU_HARDDISK_a2e14d93-a486-403c-9c37-4f6de49ddee5', 'scsi-SQEMU_QEMU_HARDDISK_a2e14d93-a486-403c-9c37-4f6de49ddee5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.570166 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part1', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part14', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part15', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part16', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.767780 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--77c51d77--cdc1--5563--af81--33d9bc4e9bd8-osd--block--77c51d77--cdc1--5563--af81--33d9bc4e9bd8'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XC0deN-vGzU-6Pu8-7l0p-bm5X-RdCc-NCjXuW', 'scsi-0QEMU_QEMU_HARDDISK_0bcbc917-1e4e-4947-8603-c7f49bd04ea8', 'scsi-SQEMU_QEMU_HARDDISK_0bcbc917-1e4e-4947-8603-c7f49bd04ea8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.767939 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-03-02-24-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.767975 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ed5f26b-b68f-43a9-951f-f4acae255308', 'scsi-SQEMU_QEMU_HARDDISK_1ed5f26b-b68f-43a9-951f-f4acae255308'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.767992 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-03-02-24-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.768024 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:38:18.768038 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-02-03 03:38:18.768069 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.768080 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.768091 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 
03:38:18.768101 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.768116 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.768134 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:18.768151 | orchestrator | skipping: 
[testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:19.067300 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part1', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part14', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part15', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part16', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:19.067427 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-03-02-24-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:19.067443 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:38:19.067453 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:38:19.067462 | orchestrator | skipping: [testbed-node-2] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:19.067489 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:19.067497 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:19.067505 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:19.067514 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:19.067539 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:38:19.067544 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-03 03:38:19.067549 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-03 03:38:19.067561 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part1', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part14', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part15', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part16', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-03 03:38:30.383690 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-03-02-24-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-03 03:38:30.383788 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:38:30.383800 | orchestrator |
2026-02-03 03:38:30.383809 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-03 03:38:30.383818 | orchestrator | Tuesday 03 February 2026 03:38:19 +0000 (0:00:01.397) 0:00:27.056 ******
2026-02-03 03:38:30.383825 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:38:30.383832 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:38:30.383839 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:38:30.383884 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:38:30.383891 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:38:30.383897 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:38:30.383904 | orchestrator |
2026-02-03 03:38:30.383911 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-03 03:38:30.383918 | orchestrator | Tuesday 03 February 2026 03:38:20 +0000 (0:00:01.005) 0:00:28.061 ******
2026-02-03 03:38:30.383925 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:38:30.383931 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:38:30.383938 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:38:30.383944 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:38:30.383951 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:38:30.383957 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:38:30.383963 | orchestrator |
2026-02-03 03:38:30.383970 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-03 03:38:30.383977 | orchestrator | Tuesday 03 February 2026 03:38:20 +0000 (0:00:00.893) 0:00:28.955 ******
2026-02-03 03:38:30.383983 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:38:30.383990 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:38:30.383996 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:38:30.384001 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:38:30.384007 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:38:30.384013 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:38:30.384019 | orchestrator |
2026-02-03 03:38:30.384025 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-03 03:38:30.384031 | orchestrator | Tuesday 03 February 2026 03:38:21 +0000 (0:00:00.599) 0:00:29.555 ******
2026-02-03 03:38:30.384038 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:38:30.384044 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:38:30.384051 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:38:30.384057 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:38:30.384064 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:38:30.384070 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:38:30.384077 | orchestrator |
2026-02-03 03:38:30.384084 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-03 03:38:30.384091 | orchestrator | Tuesday 03 February 2026 03:38:22 +0000 (0:00:00.875) 0:00:30.431 ******
2026-02-03 03:38:30.384097 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:38:30.384104 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:38:30.384110 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:38:30.384134 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:38:30.384141 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:38:30.384148 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:38:30.384154 | orchestrator |
2026-02-03 03:38:30.384161 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-03 03:38:30.384168 | orchestrator | Tuesday 03 February 2026 03:38:23 +0000 (0:00:00.657) 0:00:31.088 ******
2026-02-03 03:38:30.384174 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:38:30.384181 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:38:30.384187 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:38:30.384194 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:38:30.384200 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:38:30.384206 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:38:30.384213 | orchestrator |
2026-02-03 03:38:30.384219 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-03 03:38:30.384226 | orchestrator | Tuesday 03 February 2026 03:38:23 +0000 (0:00:00.840) 0:00:31.929 ******
2026-02-03 03:38:30.384233 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-03 03:38:30.384239 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-03 03:38:30.384246 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-03 03:38:30.384252 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-03 03:38:30.384258 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-03 03:38:30.384264 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-03 03:38:30.384270 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-03 03:38:30.384276 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-03 03:38:30.384282 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-03 03:38:30.384288 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-02-03 03:38:30.384294 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-03 03:38:30.384300 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-03 03:38:30.384306 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-03 03:38:30.384312 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-02-03 03:38:30.384319 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-03 03:38:30.384342 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-02-03 03:38:30.384349 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-02-03 03:38:30.384361 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-03 03:38:30.384368 | orchestrator |
2026-02-03 03:38:30.384374 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-03 03:38:30.384381 | orchestrator | Tuesday 03 February 2026 03:38:25 +0000 (0:00:01.794) 0:00:33.723 ******
2026-02-03 03:38:30.384387 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-03 03:38:30.384393 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-03 03:38:30.384399 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-03 03:38:30.384406 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:38:30.384413 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-03 03:38:30.384419 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-03 03:38:30.384425 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-03 03:38:30.384432 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:38:30.384439 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-03 03:38:30.384445 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-03 03:38:30.384451 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-03 03:38:30.384456 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:38:30.384462 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-03 03:38:30.384468 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-03 03:38:30.384480 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-03 03:38:30.384487 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:38:30.384492 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-03 03:38:30.384498 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-03 03:38:30.384504 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-03 03:38:30.384510 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:38:30.384517 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-03 03:38:30.384523 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-03 03:38:30.384529 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-03 03:38:30.384535 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:38:30.384541 | orchestrator |
2026-02-03 03:38:30.384548 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-03 03:38:30.384554 | orchestrator | Tuesday 03 February 2026 03:38:26 +0000 (0:00:00.949) 0:00:34.673 ******
2026-02-03 03:38:30.384560 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:38:30.384566 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:38:30.384572 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:38:30.384578 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 03:38:30.384585 | orchestrator |
2026-02-03 03:38:30.384591 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-03 03:38:30.384599 | orchestrator | Tuesday 03 February 2026 03:38:27 +0000 (0:00:01.109) 0:00:35.783 ******
2026-02-03 03:38:30.384606 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:38:30.384611 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:38:30.384618 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:38:30.384624 | orchestrator |
2026-02-03 03:38:30.384631 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-03 03:38:30.384637 | orchestrator | Tuesday 03 February 2026 03:38:28 +0000 (0:00:00.343) 0:00:36.126 ******
2026-02-03 03:38:30.384644 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:38:30.384650 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:38:30.384656 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:38:30.384662 | orchestrator |
2026-02-03 03:38:30.384668 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-03 03:38:30.384674 | orchestrator | Tuesday 03 February 2026 03:38:28 +0000 (0:00:00.361) 0:00:36.488 ******
2026-02-03 03:38:30.384680 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:38:30.384686 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:38:30.384693 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:38:30.384699 | orchestrator |
2026-02-03 03:38:30.384705 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-03 03:38:30.384764 | orchestrator | Tuesday 03 February 2026 03:38:28 +0000 (0:00:00.709) 0:00:36.839 ******
2026-02-03 03:38:30.384771 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:38:30.384777 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:38:30.384793 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:38:30.384799 | orchestrator |
2026-02-03 03:38:30.384806 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-03 03:38:30.384811 | orchestrator | Tuesday 03 February 2026 03:38:29 +0000 (0:00:00.709) 0:00:37.549 ******
2026-02-03 03:38:30.384817 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-03 03:38:30.384824 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-03 03:38:30.384830 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-03 03:38:30.384836 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:38:30.384892 | orchestrator |
2026-02-03 03:38:30.384899 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-03 03:38:30.384915 | orchestrator | Tuesday 03 February 2026 03:38:29 +0000 (0:00:00.390) 0:00:37.940 ******
2026-02-03 03:38:30.384921 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-03 03:38:30.384928 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-03 03:38:30.384934 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-03 03:38:30.384940 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:38:30.384947 | orchestrator |
2026-02-03 03:38:30.384965 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-03 03:38:50.302871 | orchestrator | Tuesday 03 February 2026 03:38:30 +0000 (0:00:00.429) 0:00:38.370 ******
2026-02-03 03:38:50.303012 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-03 03:38:50.303022 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-03 03:38:50.303029 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-03 03:38:50.303035 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:38:50.303042 | orchestrator |
2026-02-03 03:38:50.303049 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-03 03:38:50.303056 | orchestrator | Tuesday 03 February 2026 03:38:30 +0000 (0:00:00.398) 0:00:38.768 ******
2026-02-03 03:38:50.303062 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:38:50.303070 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:38:50.303076 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:38:50.303082 | orchestrator |
2026-02-03 03:38:50.303087 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-03 03:38:50.303094 | orchestrator | Tuesday 03 February 2026 03:38:31 +0000 (0:00:00.361) 0:00:39.130 ******
2026-02-03 03:38:50.303100 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-03 03:38:50.303106 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-03 03:38:50.303112 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-03 03:38:50.303119 | orchestrator |
2026-02-03 03:38:50.303125 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-03 03:38:50.303131 | orchestrator | Tuesday 03 February 2026 03:38:32 +0000 (0:00:01.077) 0:00:40.207 ******
2026-02-03 03:38:50.303137 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-03 03:38:50.303144 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-03 03:38:50.303150 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-03 03:38:50.303156 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-03 03:38:50.303162 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-03 03:38:50.303168 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-03 03:38:50.303174 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-03 03:38:50.303180 | orchestrator |
2026-02-03 03:38:50.303186 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-03 03:38:50.303192 | orchestrator | Tuesday 03 February 2026 03:38:33 +0000 (0:00:00.816) 0:00:41.024 ******
2026-02-03 03:38:50.303198 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-03 03:38:50.303204 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-03 03:38:50.303210 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-03 03:38:50.303216 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-03 03:38:50.303221 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-03 03:38:50.303227 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-03 03:38:50.303233 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-03 03:38:50.303239 | orchestrator |
2026-02-03 03:38:50.303245 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-03 03:38:50.303275 | orchestrator | Tuesday 03 February 2026 03:38:35 +0000 (0:00:02.036) 0:00:43.060 ******
2026-02-03 03:38:50.303282 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 03:38:50.303289 | orchestrator |
2026-02-03 03:38:50.303295 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-03 03:38:50.303301 | orchestrator | Tuesday 03 February 2026 03:38:36 +0000 (0:00:01.299) 0:00:44.360 ******
2026-02-03 03:38:50.303307 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 03:38:50.303313 | orchestrator |
2026-02-03 03:38:50.303319 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-03 03:38:50.303325 | orchestrator | Tuesday 03 February 2026 03:38:37 +0000 (0:00:01.311) 0:00:45.672 ******
2026-02-03 03:38:50.303332 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:38:50.303338 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:38:50.303344 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:38:50.303350 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:38:50.303356 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:38:50.303361 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:38:50.303367 | orchestrator |
2026-02-03 03:38:50.303373 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-03 03:38:50.303379 | orchestrator | Tuesday 03 February 2026 03:38:39 +0000 (0:00:01.351) 0:00:47.023 ******
2026-02-03 03:38:50.303385 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:38:50.303391 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:38:50.303397 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:38:50.303403 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:38:50.303409 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:38:50.303414 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:38:50.303420 | orchestrator |
2026-02-03 03:38:50.303426 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-03 03:38:50.303432 | orchestrator | Tuesday 03 February 2026 03:38:39 +0000 (0:00:00.759) 0:00:47.782 ******
2026-02-03 03:38:50.303438 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:38:50.303444 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:38:50.303450 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:38:50.303471 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:38:50.303478 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:38:50.303484 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:38:50.303490 | orchestrator |
2026-02-03 03:38:50.303500 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-03 03:38:50.303506 | orchestrator | Tuesday 03 February 2026 03:38:40 +0000 (0:00:00.919) 0:00:48.702 ******
2026-02-03 03:38:50.303513 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:38:50.303518 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:38:50.303524 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:38:50.303530 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:38:50.303536 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:38:50.303542 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:38:50.303548 | orchestrator |
2026-02-03 03:38:50.303554 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-03 03:38:50.303560 | orchestrator | Tuesday 03 February 2026 03:38:41 +0000 (0:00:00.729) 0:00:49.431 ******
2026-02-03 03:38:50.303566 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:38:50.303572 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:38:50.303578 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:38:50.303584 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:38:50.303590 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:38:50.303596 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:38:50.303602 | orchestrator |
2026-02-03 03:38:50.303608 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-03 03:38:50.303619 | orchestrator | Tuesday 03 February 2026 03:38:42 +0000 (0:00:01.293) 0:00:50.725 ******
2026-02-03 03:38:50.303625 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:38:50.303631 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:38:50.303645 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:38:50.303652 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:38:50.303657 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:38:50.303663 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:38:50.303669 | orchestrator |
2026-02-03 03:38:50.303675 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-03 03:38:50.303681 | orchestrator | Tuesday 03 February 2026 03:38:43 +0000 (0:00:00.653) 0:00:51.378 ******
2026-02-03 03:38:50.303687 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:38:50.303693 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:38:50.303699 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:38:50.303705 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:38:50.303711 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:38:50.303717 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:38:50.303722 | orchestrator |
2026-02-03 03:38:50.303728 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-03 03:38:50.303734 | orchestrator | Tuesday 03 February 2026 03:38:44 +0000 (0:00:00.920) 0:00:52.299 ******
2026-02-03 03:38:50.303740 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:38:50.303746 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:38:50.303752 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:38:50.303758 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:38:50.303764 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:38:50.303769 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:38:50.303775 | orchestrator |
2026-02-03 03:38:50.303781 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-03 03:38:50.303787 | orchestrator | Tuesday 03 February 2026 03:38:45 +0000 (0:00:01.048) 0:00:53.348 ******
2026-02-03 03:38:50.303793 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:38:50.303799 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:38:50.303805 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:38:50.303811 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:38:50.303817 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:38:50.303822 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:38:50.303842 | orchestrator |
2026-02-03 03:38:50.303848 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-03 03:38:50.303854 | orchestrator | Tuesday 03 February 2026 03:38:46 +0000 (0:00:01.418) 0:00:54.767 ******
2026-02-03 03:38:50.303860 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:38:50.303866 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:38:50.303872 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:38:50.303878 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:38:50.303884 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:38:50.303889 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:38:50.303895 | orchestrator |
2026-02-03 03:38:50.303901 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-03 03:38:50.303907 | orchestrator | Tuesday 03 February 2026 03:38:47 +0000 (0:00:00.661) 0:00:55.428 ******
2026-02-03 03:38:50.303913 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:38:50.303919 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:38:50.303925 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:38:50.303931 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:38:50.303937 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:38:50.303943 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:38:50.303949 | orchestrator |
2026-02-03 03:38:50.303955 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-03 03:38:50.303961 | orchestrator | Tuesday 03 February 2026 03:38:48 +0000 (0:00:00.888) 0:00:56.317 ******
2026-02-03 03:38:50.303967 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:38:50.303973 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:38:50.303984 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:38:50.303989 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:38:50.303995 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:38:50.304002 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:38:50.304007 | orchestrator |
2026-02-03 03:38:50.304013 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-03 03:38:50.304020 | orchestrator | Tuesday 03 February 2026 03:38:49 +0000 (0:00:00.701) 0:00:57.019 ******
2026-02-03 03:38:50.304026 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:38:50.304031 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:38:50.304037 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:38:50.304043 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:38:50.304049 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:38:50.304055 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:38:50.304061 | orchestrator |
2026-02-03 03:38:50.304067 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-03 03:38:50.304073 | orchestrator | Tuesday 03 February 2026 03:38:49 +0000 (0:00:00.876) 0:00:57.895 ******
2026-02-03 03:38:50.304079 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:38:50.304085 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:38:50.304094 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:40:00.477087 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:40:00.477186 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:40:00.477206 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:40:00.477211 | orchestrator |
2026-02-03 03:40:00.477216 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-03 03:40:00.477223 | orchestrator | Tuesday 03 February 2026 03:38:50 +0000 (0:00:00.633) 0:00:58.528 ******
2026-02-03 03:40:00.477227 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:40:00.477231 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:40:00.477235 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:40:00.477239 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:40:00.477243 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:40:00.477247 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:40:00.477250 | orchestrator |
2026-02-03 03:40:00.477254 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-03 03:40:00.477258 | orchestrator | Tuesday 03 February 2026 03:38:51 +0000 (0:00:00.869) 0:00:59.398 ******
2026-02-03 03:40:00.477262 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:40:00.477266 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:40:00.477270 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:40:00.477274 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:40:00.477278 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:40:00.477282 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:40:00.477285 | orchestrator |
2026-02-03 03:40:00.477290 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-03 03:40:00.477294 | orchestrator | Tuesday 03 February 2026 03:38:52 +0000 (0:00:00.617) 0:01:00.015 ******
2026-02-03 03:40:00.477297 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:40:00.477301 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:40:00.477305 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:40:00.477309 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:40:00.477314 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:40:00.477317 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:40:00.477321 | orchestrator |
2026-02-03 03:40:00.477325 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-03 03:40:00.477329 | orchestrator | Tuesday 03 February 2026 03:38:52 +0000 (0:00:00.869) 0:01:00.884 ******
2026-02-03 03:40:00.477333 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:40:00.477337 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:40:00.477340 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:40:00.477344 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:40:00.477348 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:40:00.477352 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:40:00.477373 | orchestrator |
2026-02-03 03:40:00.477377 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-03 03:40:00.477381 | orchestrator | Tuesday 03 February 2026 03:38:53 +0000 (0:00:00.666) 0:01:01.551 ******
2026-02-03 03:40:00.477385 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:40:00.477388 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:40:00.477392 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:40:00.477396 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:40:00.477400 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:40:00.477403 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:40:00.477408 | orchestrator |
2026-02-03 03:40:00.477411 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-03 03:40:00.477415 | orchestrator | Tuesday 03 February 2026 03:38:54 +0000 (0:00:01.340) 0:01:02.891 ******
2026-02-03 03:40:00.477419 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:40:00.477423 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:40:00.477427 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:40:00.477431 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:40:00.477434 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:40:00.477438 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:40:00.477442 | orchestrator |
2026-02-03 03:40:00.477446 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-03 03:40:00.477449 | orchestrator | Tuesday 03 February 2026 03:38:56 +0000 (0:00:01.823) 0:01:04.715 ******
2026-02-03 03:40:00.477453 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:40:00.477457 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:40:00.477461 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:40:00.477464 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:40:00.477468 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:40:00.477472 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:40:00.477476 | orchestrator |
2026-02-03 03:40:00.477479 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-03 03:40:00.477483 | orchestrator | Tuesday 03 February 2026 03:38:58 +0000 (0:00:02.124) 0:01:06.839 ******
2026-02-03 03:40:00.477488 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 03:40:00.477493 | orchestrator |
2026-02-03 03:40:00.477497 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-03 03:40:00.477501 | orchestrator | Tuesday 03 February 2026 03:39:00 +0000 (0:00:01.454) 0:01:08.293 ******
2026-02-03 03:40:00.477504 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:40:00.477508 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:40:00.477512 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:40:00.477516 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:40:00.477519 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:40:00.477523 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:40:00.477527 | orchestrator |
2026-02-03 03:40:00.477531 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-03 03:40:00.477534 | orchestrator | Tuesday 03 February 2026 03:39:00 +0000 (0:00:00.632) 0:01:08.926 ******
2026-02-03 03:40:00.477538 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:40:00.477542 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:40:00.477546 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:40:00.477550 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:40:00.477553 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:40:00.477557 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:40:00.477561 | orchestrator |
2026-02-03 03:40:00.477565 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-03 03:40:00.477568 | orchestrator | Tuesday 03 February 2026 03:39:01 +0000 (0:00:00.830) 0:01:09.757 ******
2026-02-03 03:40:00.477583 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-03 03:40:00.477591 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-03 03:40:00.477599 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-03 03:40:00.477603 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-03 03:40:00.477607 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-03 03:40:00.477611 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-03 03:40:00.477616 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-03 03:40:00.477620 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-03 03:40:00.477623 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-03 03:40:00.477627 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-03 03:40:00.477631 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-03 03:40:00.477635 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-03 03:40:00.477639 | orchestrator |
2026-02-03 03:40:00.477643 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-03 03:40:00.477648 | orchestrator | Tuesday 03 February 2026 03:39:03 +0000 (0:00:01.420) 0:01:11.177 ******
2026-02-03 03:40:00.477652 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:40:00.477657 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:40:00.477661 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:40:00.477666 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:40:00.477671 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:40:00.477675 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:40:00.477680 | orchestrator |
2026-02-03 03:40:00.477684 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-03 03:40:00.477689 | orchestrator | Tuesday 03 February 2026 03:39:04 +0000 (0:00:01.187) 0:01:12.365 ******
2026-02-03 03:40:00.477693 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:40:00.477698 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:40:00.477703 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:40:00.477707 | orchestrator | skipping: [testbed-node-0]
2026-02-03
03:40:00.477712 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:00.477716 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:00.477721 | orchestrator | 2026-02-03 03:40:00.477725 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-03 03:40:00.477730 | orchestrator | Tuesday 03 February 2026 03:39:05 +0000 (0:00:00.684) 0:01:13.050 ****** 2026-02-03 03:40:00.477734 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:40:00.477739 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:40:00.477743 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:40:00.477748 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:40:00.477752 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:00.477757 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:00.477761 | orchestrator | 2026-02-03 03:40:00.477765 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-03 03:40:00.477793 | orchestrator | Tuesday 03 February 2026 03:39:05 +0000 (0:00:00.855) 0:01:13.905 ****** 2026-02-03 03:40:00.477800 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:40:00.477805 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:40:00.477809 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:40:00.477814 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:40:00.477818 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:00.477823 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:00.477828 | orchestrator | 2026-02-03 03:40:00.477832 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-03 03:40:00.477837 | orchestrator | Tuesday 03 February 2026 03:39:06 +0000 (0:00:00.699) 0:01:14.605 ****** 2026-02-03 03:40:00.477846 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:40:00.477850 | orchestrator | 2026-02-03 03:40:00.477855 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-03 03:40:00.477859 | orchestrator | Tuesday 03 February 2026 03:39:07 +0000 (0:00:01.319) 0:01:15.925 ****** 2026-02-03 03:40:00.477864 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:40:00.477868 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:40:00.477872 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:40:00.477877 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:40:00.477881 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:40:00.477886 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:40:00.477890 | orchestrator | 2026-02-03 03:40:00.477895 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-03 03:40:00.477899 | orchestrator | Tuesday 03 February 2026 03:40:00 +0000 (0:00:52.164) 0:02:08.090 ****** 2026-02-03 03:40:00.477904 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-03 03:40:00.477909 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-03 03:40:00.477913 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-03 03:40:00.477918 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:40:00.477922 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-03 03:40:00.477927 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-03 03:40:00.477931 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-03 03:40:00.477936 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:40:00.477940 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  
2026-02-03 03:40:00.477948 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-03 03:40:25.507887 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-03 03:40:25.508049 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:40:25.508080 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-03 03:40:25.508103 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-03 03:40:25.508123 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-03 03:40:25.508143 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:40:25.508163 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-03 03:40:25.508184 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-03 03:40:25.508204 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-03 03:40:25.508223 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:25.508242 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-03 03:40:25.508261 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-03 03:40:25.508279 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-03 03:40:25.508299 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:25.508319 | orchestrator | 2026-02-03 03:40:25.508342 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-03 03:40:25.508361 | orchestrator | Tuesday 03 February 2026 03:40:00 +0000 (0:00:00.759) 0:02:08.849 ****** 2026-02-03 03:40:25.508380 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:40:25.508399 | orchestrator | skipping: [testbed-node-4] 2026-02-03 
03:40:25.508421 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:40:25.508443 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:40:25.508464 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:25.508517 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:25.508538 | orchestrator | 2026-02-03 03:40:25.508559 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-03 03:40:25.508581 | orchestrator | Tuesday 03 February 2026 03:40:01 +0000 (0:00:00.969) 0:02:09.818 ****** 2026-02-03 03:40:25.508602 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:40:25.508622 | orchestrator | 2026-02-03 03:40:25.508641 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-03 03:40:25.508660 | orchestrator | Tuesday 03 February 2026 03:40:01 +0000 (0:00:00.157) 0:02:09.976 ****** 2026-02-03 03:40:25.508678 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:40:25.508696 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:40:25.508715 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:40:25.508735 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:40:25.508784 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:25.508804 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:25.508823 | orchestrator | 2026-02-03 03:40:25.508841 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-03 03:40:25.508860 | orchestrator | Tuesday 03 February 2026 03:40:02 +0000 (0:00:00.638) 0:02:10.614 ****** 2026-02-03 03:40:25.508879 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:40:25.508899 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:40:25.508917 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:40:25.508936 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:40:25.508955 | orchestrator | skipping: [testbed-node-1] 2026-02-03 
03:40:25.508974 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:25.508993 | orchestrator | 2026-02-03 03:40:25.509012 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-03 03:40:25.509031 | orchestrator | Tuesday 03 February 2026 03:40:03 +0000 (0:00:00.925) 0:02:11.540 ****** 2026-02-03 03:40:25.509049 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:40:25.509069 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:40:25.509089 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:40:25.509109 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:40:25.509129 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:25.509147 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:25.509164 | orchestrator | 2026-02-03 03:40:25.509181 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-03 03:40:25.509200 | orchestrator | Tuesday 03 February 2026 03:40:04 +0000 (0:00:00.652) 0:02:12.192 ****** 2026-02-03 03:40:25.509219 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:40:25.509240 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:40:25.509259 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:40:25.509278 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:40:25.509296 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:40:25.509315 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:40:25.509333 | orchestrator | 2026-02-03 03:40:25.509353 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-03 03:40:25.509373 | orchestrator | Tuesday 03 February 2026 03:40:08 +0000 (0:00:03.878) 0:02:16.071 ****** 2026-02-03 03:40:25.509391 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:40:25.509409 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:40:25.509428 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:40:25.509447 | orchestrator | ok: [testbed-node-0] 
2026-02-03 03:40:25.509465 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:40:25.509484 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:40:25.509503 | orchestrator | 2026-02-03 03:40:25.509522 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-03 03:40:25.509541 | orchestrator | Tuesday 03 February 2026 03:40:08 +0000 (0:00:00.656) 0:02:16.727 ****** 2026-02-03 03:40:25.509562 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:40:25.509584 | orchestrator | 2026-02-03 03:40:25.509605 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-03 03:40:25.509643 | orchestrator | Tuesday 03 February 2026 03:40:10 +0000 (0:00:01.337) 0:02:18.064 ****** 2026-02-03 03:40:25.509663 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:40:25.509682 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:40:25.509699 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:40:25.509747 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:40:25.509810 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:25.509830 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:25.509850 | orchestrator | 2026-02-03 03:40:25.509869 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-03 03:40:25.509887 | orchestrator | Tuesday 03 February 2026 03:40:10 +0000 (0:00:00.843) 0:02:18.908 ****** 2026-02-03 03:40:25.509906 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:40:25.509925 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:40:25.509942 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:40:25.509960 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:40:25.509978 | orchestrator | skipping: [testbed-node-1] 2026-02-03 
03:40:25.509997 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:25.510093 | orchestrator | 2026-02-03 03:40:25.510115 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-03 03:40:25.510133 | orchestrator | Tuesday 03 February 2026 03:40:11 +0000 (0:00:00.656) 0:02:19.564 ****** 2026-02-03 03:40:25.510152 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:40:25.510172 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:40:25.510192 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:40:25.510211 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:40:25.510230 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:25.510250 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:25.510270 | orchestrator | 2026-02-03 03:40:25.510291 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-03 03:40:25.510311 | orchestrator | Tuesday 03 February 2026 03:40:12 +0000 (0:00:00.892) 0:02:20.457 ****** 2026-02-03 03:40:25.510331 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:40:25.510350 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:40:25.510368 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:40:25.510388 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:40:25.510407 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:25.510426 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:25.510447 | orchestrator | 2026-02-03 03:40:25.510466 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-03 03:40:25.510485 | orchestrator | Tuesday 03 February 2026 03:40:13 +0000 (0:00:00.632) 0:02:21.089 ****** 2026-02-03 03:40:25.510504 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:40:25.510524 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:40:25.510545 | orchestrator | skipping: [testbed-node-5] 2026-02-03 
03:40:25.510565 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:40:25.510585 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:25.510603 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:25.510620 | orchestrator | 2026-02-03 03:40:25.510639 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-03 03:40:25.510657 | orchestrator | Tuesday 03 February 2026 03:40:14 +0000 (0:00:00.945) 0:02:22.035 ****** 2026-02-03 03:40:25.510674 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:40:25.510693 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:40:25.510712 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:40:25.510732 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:40:25.510775 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:25.510794 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:25.510809 | orchestrator | 2026-02-03 03:40:25.510826 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-03 03:40:25.510845 | orchestrator | Tuesday 03 February 2026 03:40:14 +0000 (0:00:00.626) 0:02:22.662 ****** 2026-02-03 03:40:25.510899 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:40:25.510920 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:40:25.510940 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:40:25.510958 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:40:25.510978 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:25.510995 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:25.511012 | orchestrator | 2026-02-03 03:40:25.511029 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-03 03:40:25.511048 | orchestrator | Tuesday 03 February 2026 03:40:15 +0000 (0:00:00.918) 0:02:23.580 ****** 2026-02-03 03:40:25.511066 | orchestrator | skipping: [testbed-node-3] 2026-02-03 
03:40:25.511084 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:40:25.511102 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:40:25.511119 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:40:25.511136 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:25.511154 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:25.511172 | orchestrator | 2026-02-03 03:40:25.511190 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-03 03:40:25.511208 | orchestrator | Tuesday 03 February 2026 03:40:16 +0000 (0:00:00.636) 0:02:24.216 ****** 2026-02-03 03:40:25.511225 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:40:25.511244 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:40:25.511263 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:40:25.511280 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:40:25.511299 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:40:25.511318 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:40:25.511336 | orchestrator | 2026-02-03 03:40:25.511354 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-03 03:40:25.511373 | orchestrator | Tuesday 03 February 2026 03:40:17 +0000 (0:00:01.393) 0:02:25.609 ****** 2026-02-03 03:40:25.511394 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:40:25.511415 | orchestrator | 2026-02-03 03:40:25.511434 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-03 03:40:25.511453 | orchestrator | Tuesday 03 February 2026 03:40:18 +0000 (0:00:01.352) 0:02:26.962 ****** 2026-02-03 03:40:25.511473 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-02-03 03:40:25.511492 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 
2026-02-03 03:40:25.511512 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-02-03 03:40:25.511532 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-02-03 03:40:25.511552 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-02-03 03:40:25.511569 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-03 03:40:25.511604 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-02-03 03:40:29.459801 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-03 03:40:29.459938 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-02-03 03:40:29.459953 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-02-03 03:40:29.459965 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-02-03 03:40:29.459978 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-03 03:40:29.459990 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-02-03 03:40:29.460001 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-03 03:40:29.460013 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-02-03 03:40:29.460024 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-02-03 03:40:29.460035 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-02-03 03:40:29.460046 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-03 03:40:29.460057 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-02-03 03:40:29.460100 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-03 03:40:29.460112 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-02-03 03:40:29.460123 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-02-03 03:40:29.460134 | orchestrator | changed: [testbed-node-3] => 
(item=/var/lib/ceph/tmp) 2026-02-03 03:40:29.460146 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-02-03 03:40:29.460157 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-03 03:40:29.460168 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-02-03 03:40:29.460179 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-02-03 03:40:29.460190 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-02-03 03:40:29.460201 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-03 03:40:29.460213 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-02-03 03:40:29.460225 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-03 03:40:29.460238 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-02-03 03:40:29.460250 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-02-03 03:40:29.460263 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-02-03 03:40:29.460276 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-03 03:40:29.460288 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-02-03 03:40:29.460301 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-02-03 03:40:29.460313 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-03 03:40:29.460326 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-02-03 03:40:29.460338 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-02-03 03:40:29.460351 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-03 03:40:29.460363 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-02-03 03:40:29.460376 | orchestrator | changed: [testbed-node-1] => 
(item=/var/lib/ceph/crash) 2026-02-03 03:40:29.460388 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-03 03:40:29.460401 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-02-03 03:40:29.460413 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-02-03 03:40:29.460426 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-03 03:40:29.460439 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-03 03:40:29.460451 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-02-03 03:40:29.460463 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-02-03 03:40:29.460476 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-02-03 03:40:29.460488 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-03 03:40:29.460500 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-03 03:40:29.460513 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-03 03:40:29.460525 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-03 03:40:29.460539 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-03 03:40:29.460552 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-03 03:40:29.460564 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-03 03:40:29.460575 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-03 03:40:29.460586 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-03 03:40:29.460597 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-03 03:40:29.460616 | orchestrator | changed: 
[testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-03 03:40:29.460627 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-03 03:40:29.460638 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-03 03:40:29.460649 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-03 03:40:29.460661 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-03 03:40:29.460693 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-03 03:40:29.460712 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-03 03:40:29.460724 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-03 03:40:29.460735 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-03 03:40:29.460746 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-03 03:40:29.460777 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-03 03:40:29.460788 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-03 03:40:29.460799 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-03 03:40:29.460811 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-03 03:40:29.460822 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-03 03:40:29.460833 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-02-03 03:40:29.460844 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-03 03:40:29.460855 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-03 03:40:29.460866 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 
2026-02-03 03:40:29.460878 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-03 03:40:29.460889 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-03 03:40:29.460900 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-02-03 03:40:29.460911 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-02-03 03:40:29.460922 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-03 03:40:29.460934 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-03 03:40:29.460945 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-02-03 03:40:29.460957 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-03 03:40:29.460968 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-02-03 03:40:29.460979 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-02-03 03:40:29.460991 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-02-03 03:40:29.461002 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-02-03 03:40:29.461013 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-02-03 03:40:29.461024 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-02-03 03:40:29.461035 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-02-03 03:40:29.461047 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-02-03 03:40:29.461058 | orchestrator | 2026-02-03 03:40:29.461071 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-03 03:40:29.461083 | orchestrator | Tuesday 03 February 2026 03:40:25 +0000 (0:00:06.532) 0:02:33.494 ****** 2026-02-03 03:40:29.461094 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:40:29.461105 | 
orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:29.461116 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:29.461128 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-03 03:40:29.461149 | orchestrator | 2026-02-03 03:40:29.461161 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-03 03:40:29.461172 | orchestrator | Tuesday 03 February 2026 03:40:26 +0000 (0:00:01.083) 0:02:34.578 ****** 2026-02-03 03:40:29.461183 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-03 03:40:29.461195 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-03 03:40:29.461206 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-03 03:40:29.461217 | orchestrator | 2026-02-03 03:40:29.461228 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-03 03:40:29.461240 | orchestrator | Tuesday 03 February 2026 03:40:27 +0000 (0:00:00.708) 0:02:35.287 ****** 2026-02-03 03:40:29.461250 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-03 03:40:29.461262 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-03 03:40:29.461273 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-03 03:40:29.461283 | orchestrator | 2026-02-03 03:40:29.461295 | orchestrator | TASK [ceph-config 
: Reset num_osds] ******************************************** 2026-02-03 03:40:29.461306 | orchestrator | Tuesday 03 February 2026 03:40:28 +0000 (0:00:01.297) 0:02:36.585 ****** 2026-02-03 03:40:29.461317 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:40:29.461328 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:40:29.461339 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:40:29.461364 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:40:29.461376 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:29.461398 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:29.461409 | orchestrator | 2026-02-03 03:40:29.461421 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-03 03:40:29.461444 | orchestrator | Tuesday 03 February 2026 03:40:29 +0000 (0:00:00.859) 0:02:37.444 ****** 2026-02-03 03:40:43.938553 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:40:43.938804 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:40:43.938827 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:40:43.938840 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:40:43.938853 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:43.938864 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:43.938876 | orchestrator | 2026-02-03 03:40:43.938890 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-03 03:40:43.938904 | orchestrator | Tuesday 03 February 2026 03:40:30 +0000 (0:00:00.613) 0:02:38.058 ****** 2026-02-03 03:40:43.938916 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:40:43.938927 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:40:43.938939 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:40:43.938951 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:40:43.938962 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:43.938973 | orchestrator | skipping: [testbed-node-2] 
2026-02-03 03:40:43.938984 | orchestrator | 2026-02-03 03:40:43.938995 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-03 03:40:43.939006 | orchestrator | Tuesday 03 February 2026 03:40:30 +0000 (0:00:00.920) 0:02:38.979 ****** 2026-02-03 03:40:43.939018 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:40:43.939031 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:40:43.939044 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:40:43.939056 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:40:43.939069 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:43.939082 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:43.939129 | orchestrator | 2026-02-03 03:40:43.939144 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-03 03:40:43.939173 | orchestrator | Tuesday 03 February 2026 03:40:31 +0000 (0:00:00.615) 0:02:39.595 ****** 2026-02-03 03:40:43.939215 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:40:43.939238 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:40:43.939256 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:40:43.939274 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:40:43.939291 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:43.939308 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:43.939324 | orchestrator | 2026-02-03 03:40:43.939339 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-03 03:40:43.939358 | orchestrator | Tuesday 03 February 2026 03:40:32 +0000 (0:00:00.977) 0:02:40.572 ****** 2026-02-03 03:40:43.939375 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:40:43.939392 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:40:43.939408 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:40:43.939424 | orchestrator | skipping: 
[testbed-node-0] 2026-02-03 03:40:43.939441 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:43.939459 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:43.939477 | orchestrator | 2026-02-03 03:40:43.939495 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-03 03:40:43.939513 | orchestrator | Tuesday 03 February 2026 03:40:33 +0000 (0:00:00.677) 0:02:41.250 ****** 2026-02-03 03:40:43.939531 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:40:43.939551 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:40:43.939568 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:40:43.939585 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:40:43.939603 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:43.939620 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:43.939637 | orchestrator | 2026-02-03 03:40:43.939656 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-03 03:40:43.939676 | orchestrator | Tuesday 03 February 2026 03:40:34 +0000 (0:00:00.878) 0:02:42.129 ****** 2026-02-03 03:40:43.939695 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:40:43.939714 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:40:43.939734 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:40:43.939781 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:40:43.939800 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:43.939818 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:43.939836 | orchestrator | 2026-02-03 03:40:43.939853 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-03 03:40:43.939871 | orchestrator | Tuesday 03 February 2026 03:40:34 +0000 (0:00:00.676) 0:02:42.806 ****** 2026-02-03 03:40:43.939888 | orchestrator | skipping: 
[testbed-node-0] 2026-02-03 03:40:43.939907 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:43.939925 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:43.939943 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:40:43.939962 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:40:43.939979 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:40:43.939996 | orchestrator | 2026-02-03 03:40:43.940014 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-03 03:40:43.940033 | orchestrator | Tuesday 03 February 2026 03:40:37 +0000 (0:00:03.063) 0:02:45.869 ****** 2026-02-03 03:40:43.940051 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:40:43.940070 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:40:43.940089 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:40:43.940108 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:40:43.940127 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:43.940146 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:43.940163 | orchestrator | 2026-02-03 03:40:43.940182 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-03 03:40:43.940222 | orchestrator | Tuesday 03 February 2026 03:40:38 +0000 (0:00:00.622) 0:02:46.492 ****** 2026-02-03 03:40:43.940241 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:40:43.940260 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:40:43.940279 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:40:43.940297 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:40:43.940316 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:43.940332 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:43.940350 | orchestrator | 2026-02-03 03:40:43.940368 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-03 03:40:43.940386 | orchestrator | Tuesday 03 February 2026 03:40:39 +0000 
(0:00:00.999) 0:02:47.491 ****** 2026-02-03 03:40:43.940405 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:40:43.940424 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:40:43.940467 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:40:43.940486 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:40:43.940538 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:43.940559 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:43.940579 | orchestrator | 2026-02-03 03:40:43.940598 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-03 03:40:43.940616 | orchestrator | Tuesday 03 February 2026 03:40:40 +0000 (0:00:00.688) 0:02:48.180 ****** 2026-02-03 03:40:43.940633 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-03 03:40:43.940654 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-03 03:40:43.940671 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-03 03:40:43.940689 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:40:43.940707 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:43.940725 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:43.940782 | orchestrator | 2026-02-03 03:40:43.940803 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-03 03:40:43.940821 | orchestrator | Tuesday 03 February 2026 03:40:41 +0000 (0:00:00.949) 0:02:49.130 ****** 2026-02-03 03:40:43.940842 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast 
endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-02-03 03:40:43.940866 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-02-03 03:40:43.940885 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:40:43.940903 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-02-03 03:40:43.940922 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-02-03 03:40:43.940940 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:40:43.940957 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-02-03 03:40:43.940993 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 
 2026-02-03 03:40:43.941013 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:40:43.941031 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:40:43.941048 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:43.941066 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:43.941086 | orchestrator | 2026-02-03 03:40:43.941104 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-03 03:40:43.941123 | orchestrator | Tuesday 03 February 2026 03:40:41 +0000 (0:00:00.674) 0:02:49.805 ****** 2026-02-03 03:40:43.941141 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:40:43.941159 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:40:43.941177 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:40:43.941195 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:40:43.941214 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:43.941232 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:43.941248 | orchestrator | 2026-02-03 03:40:43.941266 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-03 03:40:43.941278 | orchestrator | Tuesday 03 February 2026 03:40:42 +0000 (0:00:00.915) 0:02:50.720 ****** 2026-02-03 03:40:43.941289 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:40:43.941300 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:40:43.941310 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:40:43.941321 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:40:43.941331 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:40:43.941341 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:40:43.941350 | orchestrator | 2026-02-03 03:40:43.941360 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-03 03:40:43.941370 | orchestrator | Tuesday 03 February 
2026 03:40:43 +0000 (0:00:00.629) 0:02:51.350 ****** 2026-02-03 03:40:43.941389 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:40:43.941412 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:41:02.275635 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:41:02.275807 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:41:02.275823 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:41:02.275830 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:41:02.275838 | orchestrator | 2026-02-03 03:41:02.275848 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-03 03:41:02.275857 | orchestrator | Tuesday 03 February 2026 03:40:44 +0000 (0:00:00.947) 0:02:52.297 ****** 2026-02-03 03:41:02.275864 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:41:02.275872 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:41:02.275880 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:41:02.275887 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:41:02.275894 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:41:02.275902 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:41:02.275909 | orchestrator | 2026-02-03 03:41:02.275917 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-03 03:41:02.275924 | orchestrator | Tuesday 03 February 2026 03:40:44 +0000 (0:00:00.665) 0:02:52.962 ****** 2026-02-03 03:41:02.275932 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:41:02.275939 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:41:02.275946 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:41:02.275953 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:41:02.275961 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:41:02.275968 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:41:02.275998 | orchestrator | 2026-02-03 03:41:02.276006 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-03 03:41:02.276014 | orchestrator | Tuesday 03 February 2026 03:40:45 +0000 (0:00:00.918) 0:02:53.882 ****** 2026-02-03 03:41:02.276021 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:41:02.276029 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:41:02.276036 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:41:02.276043 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:41:02.276051 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:41:02.276058 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:41:02.276066 | orchestrator | 2026-02-03 03:41:02.276073 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-03 03:41:02.276080 | orchestrator | Tuesday 03 February 2026 03:40:46 +0000 (0:00:00.879) 0:02:54.761 ****** 2026-02-03 03:41:02.276088 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-03 03:41:02.276096 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-03 03:41:02.276103 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-03 03:41:02.276111 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:41:02.276118 | orchestrator | 2026-02-03 03:41:02.276128 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-03 03:41:02.276140 | orchestrator | Tuesday 03 February 2026 03:40:47 +0000 (0:00:00.426) 0:02:55.188 ****** 2026-02-03 03:41:02.276152 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-03 03:41:02.276164 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-03 03:41:02.276177 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-03 03:41:02.276189 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:41:02.276202 | orchestrator | 2026-02-03 03:41:02.276214 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-03 03:41:02.276226 | orchestrator | Tuesday 03 February 2026 03:40:47 +0000 (0:00:00.448) 0:02:55.637 ****** 2026-02-03 03:41:02.276238 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-03 03:41:02.276249 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-03 03:41:02.276261 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-03 03:41:02.276274 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:41:02.276285 | orchestrator | 2026-02-03 03:41:02.276295 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-03 03:41:02.276306 | orchestrator | Tuesday 03 February 2026 03:40:48 +0000 (0:00:00.436) 0:02:56.074 ****** 2026-02-03 03:41:02.276317 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:41:02.276329 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:41:02.276339 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:41:02.276350 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:41:02.276361 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:41:02.276372 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:41:02.276384 | orchestrator | 2026-02-03 03:41:02.276395 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-03 03:41:02.276408 | orchestrator | Tuesday 03 February 2026 03:40:48 +0000 (0:00:00.667) 0:02:56.741 ****** 2026-02-03 03:41:02.276420 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-03 03:41:02.276434 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-03 03:41:02.276443 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-02-03 03:41:02.276450 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-03 03:41:02.276457 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:41:02.276464 | orchestrator | skipping: [testbed-node-1] => 
(item=0)  2026-02-03 03:41:02.276471 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:41:02.276479 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-02-03 03:41:02.276486 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:41:02.276493 | orchestrator | 2026-02-03 03:41:02.276501 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-03 03:41:02.276517 | orchestrator | Tuesday 03 February 2026 03:40:50 +0000 (0:00:01.810) 0:02:58.552 ****** 2026-02-03 03:41:02.276524 | orchestrator | changed: [testbed-node-3] 2026-02-03 03:41:02.276531 | orchestrator | changed: [testbed-node-4] 2026-02-03 03:41:02.276539 | orchestrator | changed: [testbed-node-5] 2026-02-03 03:41:02.276546 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:41:02.276553 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:41:02.276560 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:41:02.276567 | orchestrator | 2026-02-03 03:41:02.276575 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-03 03:41:02.276582 | orchestrator | Tuesday 03 February 2026 03:40:53 +0000 (0:00:02.719) 0:03:01.271 ****** 2026-02-03 03:41:02.276589 | orchestrator | changed: [testbed-node-3] 2026-02-03 03:41:02.276610 | orchestrator | changed: [testbed-node-4] 2026-02-03 03:41:02.276618 | orchestrator | changed: [testbed-node-5] 2026-02-03 03:41:02.276643 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:41:02.276651 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:41:02.276658 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:41:02.276665 | orchestrator | 2026-02-03 03:41:02.276673 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-03 03:41:02.276680 | orchestrator | Tuesday 03 February 2026 03:40:54 +0000 (0:00:01.043) 0:03:02.315 ****** 2026-02-03 03:41:02.276687 | orchestrator | skipping: 
[testbed-node-3] 2026-02-03 03:41:02.276694 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:41:02.276701 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:41:02.276709 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:41:02.276717 | orchestrator | 2026-02-03 03:41:02.276725 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-02-03 03:41:02.276755 | orchestrator | Tuesday 03 February 2026 03:40:55 +0000 (0:00:01.177) 0:03:03.492 ****** 2026-02-03 03:41:02.276763 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:41:02.276770 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:41:02.276778 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:41:02.276785 | orchestrator | 2026-02-03 03:41:02.276792 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-02-03 03:41:02.276799 | orchestrator | Tuesday 03 February 2026 03:40:55 +0000 (0:00:00.338) 0:03:03.830 ****** 2026-02-03 03:41:02.276807 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:41:02.276814 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:41:02.276821 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:41:02.276828 | orchestrator | 2026-02-03 03:41:02.276835 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-02-03 03:41:02.276843 | orchestrator | Tuesday 03 February 2026 03:40:57 +0000 (0:00:01.550) 0:03:05.381 ****** 2026-02-03 03:41:02.276850 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-03 03:41:02.276857 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-03 03:41:02.276864 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-03 03:41:02.276871 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:41:02.276879 | orchestrator | 2026-02-03 
03:41:02.276886 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-02-03 03:41:02.276893 | orchestrator | Tuesday 03 February 2026 03:40:58 +0000 (0:00:00.676) 0:03:06.058 ****** 2026-02-03 03:41:02.276900 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:41:02.276908 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:41:02.276915 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:41:02.276922 | orchestrator | 2026-02-03 03:41:02.276930 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-03 03:41:02.276937 | orchestrator | Tuesday 03 February 2026 03:40:58 +0000 (0:00:00.361) 0:03:06.419 ****** 2026-02-03 03:41:02.276944 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:41:02.276952 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:41:02.276959 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:41:02.276972 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-03 03:41:02.276979 | orchestrator | 2026-02-03 03:41:02.276987 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-02-03 03:41:02.276994 | orchestrator | Tuesday 03 February 2026 03:40:59 +0000 (0:00:01.118) 0:03:07.538 ****** 2026-02-03 03:41:02.277001 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-03 03:41:02.277008 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-03 03:41:02.277016 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-03 03:41:02.277023 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:41:02.277031 | orchestrator | 2026-02-03 03:41:02.277038 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-02-03 03:41:02.277045 | orchestrator | Tuesday 03 February 2026 03:40:59 +0000 (0:00:00.418) 
0:03:07.956 ****** 2026-02-03 03:41:02.277052 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:41:02.277060 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:41:02.277067 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:41:02.277074 | orchestrator | 2026-02-03 03:41:02.277081 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-02-03 03:41:02.277089 | orchestrator | Tuesday 03 February 2026 03:41:00 +0000 (0:00:00.358) 0:03:08.315 ****** 2026-02-03 03:41:02.277096 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:41:02.277104 | orchestrator | 2026-02-03 03:41:02.277117 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-02-03 03:41:02.277125 | orchestrator | Tuesday 03 February 2026 03:41:00 +0000 (0:00:00.250) 0:03:08.566 ****** 2026-02-03 03:41:02.277133 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:41:02.277140 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:41:02.277147 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:41:02.277155 | orchestrator | 2026-02-03 03:41:02.277162 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-02-03 03:41:02.277169 | orchestrator | Tuesday 03 February 2026 03:41:00 +0000 (0:00:00.325) 0:03:08.892 ****** 2026-02-03 03:41:02.277177 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:41:02.277184 | orchestrator | 2026-02-03 03:41:02.277191 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-02-03 03:41:02.277198 | orchestrator | Tuesday 03 February 2026 03:41:01 +0000 (0:00:00.724) 0:03:09.616 ****** 2026-02-03 03:41:02.277206 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:41:02.277213 | orchestrator | 2026-02-03 03:41:02.277220 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-02-03 03:41:02.277228 | 
orchestrator | Tuesday 03 February 2026 03:41:01 +0000 (0:00:00.249) 0:03:09.865 ****** 2026-02-03 03:41:02.277235 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:41:02.277242 | orchestrator | 2026-02-03 03:41:02.277250 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-02-03 03:41:02.277257 | orchestrator | Tuesday 03 February 2026 03:41:01 +0000 (0:00:00.123) 0:03:09.988 ****** 2026-02-03 03:41:02.277269 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:41:02.277276 | orchestrator | 2026-02-03 03:41:02.277289 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-02-03 03:41:20.808418 | orchestrator | Tuesday 03 February 2026 03:41:02 +0000 (0:00:00.273) 0:03:10.262 ****** 2026-02-03 03:41:20.808538 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:41:20.808557 | orchestrator | 2026-02-03 03:41:20.808571 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-02-03 03:41:20.808583 | orchestrator | Tuesday 03 February 2026 03:41:02 +0000 (0:00:00.244) 0:03:10.507 ****** 2026-02-03 03:41:20.808594 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-03 03:41:20.808606 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-03 03:41:20.808616 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-03 03:41:20.808656 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:41:20.808668 | orchestrator | 2026-02-03 03:41:20.808679 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-02-03 03:41:20.808691 | orchestrator | Tuesday 03 February 2026 03:41:02 +0000 (0:00:00.464) 0:03:10.972 ****** 2026-02-03 03:41:20.808701 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:41:20.808712 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:41:20.808843 | orchestrator | 
skipping: [testbed-node-5]
2026-02-03 03:41:20.808855 | orchestrator |
2026-02-03 03:41:20.808866 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-03 03:41:20.808877 | orchestrator | Tuesday 03 February 2026 03:41:03 +0000 (0:00:00.309) 0:03:11.281 ******
2026-02-03 03:41:20.808888 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:41:20.808899 | orchestrator |
2026-02-03 03:41:20.808910 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-03 03:41:20.808921 | orchestrator | Tuesday 03 February 2026 03:41:03 +0000 (0:00:00.232) 0:03:11.513 ******
2026-02-03 03:41:20.808932 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:41:20.808943 | orchestrator |
2026-02-03 03:41:20.808954 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-02-03 03:41:20.808965 | orchestrator | Tuesday 03 February 2026 03:41:03 +0000 (0:00:00.267) 0:03:11.781 ******
2026-02-03 03:41:20.808977 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:41:20.808988 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:41:20.808998 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:41:20.809010 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 03:41:20.809021 | orchestrator |
2026-02-03 03:41:20.809032 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-02-03 03:41:20.809044 | orchestrator | Tuesday 03 February 2026 03:41:04 +0000 (0:00:01.055) 0:03:12.837 ******
2026-02-03 03:41:20.809055 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:41:20.809067 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:41:20.809078 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:41:20.809089 | orchestrator |
2026-02-03 03:41:20.809100 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-02-03 03:41:20.809111 | orchestrator | Tuesday 03 February 2026 03:41:05 +0000 (0:00:00.318) 0:03:13.155 ******
2026-02-03 03:41:20.809122 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:41:20.809133 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:41:20.809144 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:41:20.809155 | orchestrator |
2026-02-03 03:41:20.809166 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-02-03 03:41:20.809177 | orchestrator | Tuesday 03 February 2026 03:41:06 +0000 (0:00:01.551) 0:03:14.707 ******
2026-02-03 03:41:20.809188 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-03 03:41:20.809199 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-03 03:41:20.809210 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-03 03:41:20.809221 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:41:20.809232 | orchestrator |
2026-02-03 03:41:20.809243 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-02-03 03:41:20.809255 | orchestrator | Tuesday 03 February 2026 03:41:07 +0000 (0:00:00.718) 0:03:15.425 ******
2026-02-03 03:41:20.809266 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:41:20.809276 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:41:20.809287 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:41:20.809298 | orchestrator |
2026-02-03 03:41:20.809309 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-03 03:41:20.809320 | orchestrator | Tuesday 03 February 2026 03:41:07 +0000 (0:00:00.372) 0:03:15.798 ******
2026-02-03 03:41:20.809331 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:41:20.809342 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:41:20.809353 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:41:20.809372 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 03:41:20.809384 | orchestrator |
2026-02-03 03:41:20.809395 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-02-03 03:41:20.809406 | orchestrator | Tuesday 03 February 2026 03:41:08 +0000 (0:00:01.172) 0:03:16.970 ******
2026-02-03 03:41:20.809417 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:41:20.809428 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:41:20.809440 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:41:20.809456 | orchestrator |
2026-02-03 03:41:20.809474 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-02-03 03:41:20.809492 | orchestrator | Tuesday 03 February 2026 03:41:09 +0000 (0:00:00.389) 0:03:17.360 ******
2026-02-03 03:41:20.809509 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:41:20.809528 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:41:20.809541 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:41:20.809552 | orchestrator |
2026-02-03 03:41:20.809563 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-02-03 03:41:20.809574 | orchestrator | Tuesday 03 February 2026 03:41:10 +0000 (0:00:01.249) 0:03:18.609 ******
2026-02-03 03:41:20.809585 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-03 03:41:20.809596 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-03 03:41:20.809622 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-03 03:41:20.809651 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:41:20.809663 | orchestrator |
2026-02-03 03:41:20.809674 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-02-03 03:41:20.809686 | orchestrator | Tuesday 03 February 2026 03:41:11 +0000 (0:00:00.917) 0:03:19.527 ******
2026-02-03 03:41:20.809697 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:41:20.809707 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:41:20.809741 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:41:20.809753 | orchestrator |
2026-02-03 03:41:20.809764 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-02-03 03:41:20.809775 | orchestrator | Tuesday 03 February 2026 03:41:12 +0000 (0:00:00.624) 0:03:20.152 ******
2026-02-03 03:41:20.809786 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:41:20.809797 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:41:20.809807 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:41:20.809818 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:41:20.809829 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:41:20.809839 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:41:20.809850 | orchestrator |
2026-02-03 03:41:20.809861 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-03 03:41:20.809872 | orchestrator | Tuesday 03 February 2026 03:41:12 +0000 (0:00:00.695) 0:03:20.848 ******
2026-02-03 03:41:20.809883 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:41:20.809894 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:41:20.809904 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:41:20.809915 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 03:41:20.809926 | orchestrator |
2026-02-03 03:41:20.809937 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-02-03 03:41:20.809948 | orchestrator | Tuesday 03 February 2026 03:41:13 +0000 (0:00:01.133) 0:03:21.982 ******
2026-02-03 03:41:20.809959 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:41:20.809970 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:41:20.809981 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:41:20.809991 | orchestrator |
2026-02-03 03:41:20.810002 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-02-03 03:41:20.810013 | orchestrator | Tuesday 03 February 2026 03:41:14 +0000 (0:00:00.363) 0:03:22.345 ******
2026-02-03 03:41:20.810082 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:41:20.810102 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:41:20.810113 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:41:20.810124 | orchestrator |
2026-02-03 03:41:20.810135 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-02-03 03:41:20.810183 | orchestrator | Tuesday 03 February 2026 03:41:15 +0000 (0:00:01.290) 0:03:23.635 ******
2026-02-03 03:41:20.810194 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-03 03:41:20.810205 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-03 03:41:20.810216 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-03 03:41:20.810227 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:41:20.810238 | orchestrator |
2026-02-03 03:41:20.810250 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-02-03 03:41:20.810261 | orchestrator | Tuesday 03 February 2026 03:41:16 +0000 (0:00:00.944) 0:03:24.580 ******
2026-02-03 03:41:20.810271 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:41:20.810282 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:41:20.810293 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:41:20.810304 | orchestrator |
2026-02-03 03:41:20.810315 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-02-03 03:41:20.810325 | orchestrator |
2026-02-03 03:41:20.810337 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-03 03:41:20.810347 | orchestrator | Tuesday 03 February 2026 03:41:17 +0000 (0:00:00.858) 0:03:25.439 ******
2026-02-03 03:41:20.810360 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 03:41:20.810372 | orchestrator |
2026-02-03 03:41:20.810383 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-03 03:41:20.810394 | orchestrator | Tuesday 03 February 2026 03:41:17 +0000 (0:00:00.531) 0:03:25.971 ******
2026-02-03 03:41:20.810405 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 03:41:20.810416 | orchestrator |
2026-02-03 03:41:20.810427 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-03 03:41:20.810438 | orchestrator | Tuesday 03 February 2026 03:41:18 +0000 (0:00:00.817) 0:03:26.789 ******
2026-02-03 03:41:20.810448 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:41:20.810459 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:41:20.810470 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:41:20.810481 | orchestrator |
2026-02-03 03:41:20.810492 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-03 03:41:20.810503 | orchestrator | Tuesday 03 February 2026 03:41:19 +0000 (0:00:00.771) 0:03:27.560 ******
2026-02-03 03:41:20.810513 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:41:20.810524 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:41:20.810535 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:41:20.810546 | orchestrator |
2026-02-03 03:41:20.810557 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-03 03:41:20.810568 | orchestrator | Tuesday 03 February 2026 03:41:20 +0000 (0:00:00.568) 0:03:28.129 ******
2026-02-03 03:41:20.810578 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:41:20.810589 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:41:20.810600 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:41:20.810611 | orchestrator |
2026-02-03 03:41:20.810622 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-03 03:41:20.810632 | orchestrator | Tuesday 03 February 2026 03:41:20 +0000 (0:00:00.332) 0:03:28.462 ******
2026-02-03 03:41:20.810643 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:41:20.810654 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:41:20.810671 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:41:20.810682 | orchestrator |
2026-02-03 03:41:20.810693 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-03 03:41:20.810713 | orchestrator | Tuesday 03 February 2026 03:41:20 +0000 (0:00:00.331) 0:03:28.794 ******
2026-02-03 03:41:42.544870 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:41:42.544992 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:41:42.545015 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:41:42.545032 | orchestrator |
2026-02-03 03:41:42.545050 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-03 03:41:42.545063 | orchestrator | Tuesday 03 February 2026 03:41:21 +0000 (0:00:00.738) 0:03:29.532 ******
2026-02-03 03:41:42.545073 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:41:42.545083 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:41:42.545092 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:41:42.545101 | orchestrator |
2026-02-03 03:41:42.545110 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-03 03:41:42.545119 | orchestrator | Tuesday 03 February 2026 03:41:22 +0000 (0:00:00.619) 0:03:30.152 ******
2026-02-03 03:41:42.545127 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:41:42.545136 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:41:42.545145 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:41:42.545153 | orchestrator |
2026-02-03 03:41:42.545162 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-03 03:41:42.545171 | orchestrator | Tuesday 03 February 2026 03:41:22 +0000 (0:00:00.330) 0:03:30.483 ******
2026-02-03 03:41:42.545180 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:41:42.545188 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:41:42.545197 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:41:42.545206 | orchestrator |
2026-02-03 03:41:42.545215 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-03 03:41:42.545224 | orchestrator | Tuesday 03 February 2026 03:41:23 +0000 (0:00:00.764) 0:03:31.247 ******
2026-02-03 03:41:42.545232 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:41:42.545241 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:41:42.545249 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:41:42.545258 | orchestrator |
2026-02-03 03:41:42.545267 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-03 03:41:42.545276 | orchestrator | Tuesday 03 February 2026 03:41:24 +0000 (0:00:00.586) 0:03:32.024 ******
2026-02-03 03:41:42.545284 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:41:42.545293 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:41:42.545302 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:41:42.545311 | orchestrator |
2026-02-03 03:41:42.545319 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-03 03:41:42.545328 | orchestrator | Tuesday 03 February 2026 03:41:24 +0000 (0:00:00.358) 0:03:32.610 ******
2026-02-03 03:41:42.545337 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:41:42.545346 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:41:42.545355 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:41:42.545363 | orchestrator |
2026-02-03 03:41:42.545372 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-03 03:41:42.545381 | orchestrator | Tuesday 03 February 2026 03:41:24 +0000 (0:00:00.358) 0:03:32.969 ******
2026-02-03 03:41:42.545390 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:41:42.545399 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:41:42.545408 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:41:42.545418 | orchestrator |
2026-02-03 03:41:42.545428 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-03 03:41:42.545439 | orchestrator | Tuesday 03 February 2026 03:41:25 +0000 (0:00:00.327) 0:03:33.296 ******
2026-02-03 03:41:42.545449 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:41:42.545459 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:41:42.545469 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:41:42.545479 | orchestrator |
2026-02-03 03:41:42.545490 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-03 03:41:42.545500 | orchestrator | Tuesday 03 February 2026 03:41:25 +0000 (0:00:00.348) 0:03:33.644 ******
2026-02-03 03:41:42.545511 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:41:42.545553 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:41:42.545568 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:41:42.545583 | orchestrator |
2026-02-03 03:41:42.545596 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-03 03:41:42.545611 | orchestrator | Tuesday 03 February 2026 03:41:26 +0000 (0:00:00.589) 0:03:34.234 ******
2026-02-03 03:41:42.545625 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:41:42.545639 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:41:42.545654 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:41:42.545668 | orchestrator |
2026-02-03 03:41:42.545684 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-03 03:41:42.545699 | orchestrator | Tuesday 03 February 2026 03:41:26 +0000 (0:00:00.338) 0:03:34.573 ******
2026-02-03 03:41:42.545766 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:41:42.545782 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:41:42.545796 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:41:42.545811 | orchestrator |
2026-02-03 03:41:42.545828 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-03 03:41:42.545844 | orchestrator | Tuesday 03 February 2026 03:41:26 +0000 (0:00:00.345) 0:03:34.919 ******
2026-02-03 03:41:42.545860 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:41:42.545877 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:41:42.545893 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:41:42.545908 | orchestrator |
2026-02-03 03:41:42.545924 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-03 03:41:42.545939 | orchestrator | Tuesday 03 February 2026 03:41:27 +0000 (0:00:00.346) 0:03:35.265 ******
2026-02-03 03:41:42.545954 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:41:42.545968 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:41:42.545983 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:41:42.545998 | orchestrator |
2026-02-03 03:41:42.546012 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-03 03:41:42.546101 | orchestrator | Tuesday 03 February 2026 03:41:27 +0000 (0:00:00.408) 0:03:35.674 ******
2026-02-03 03:41:42.546111 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:41:42.546120 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:41:42.546128 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:41:42.546138 | orchestrator |
2026-02-03 03:41:42.546171 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-02-03 03:41:42.546188 | orchestrator | Tuesday 03 February 2026 03:41:28 +0000 (0:00:00.907) 0:03:36.581 ******
2026-02-03 03:41:42.546202 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:41:42.546216 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:41:42.546257 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:41:42.546274 | orchestrator |
2026-02-03 03:41:42.546288 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-02-03 03:41:42.546303 | orchestrator | Tuesday 03 February 2026 03:41:28 +0000 (0:00:00.346) 0:03:36.928 ******
2026-02-03 03:41:42.546320 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 03:41:42.546335 | orchestrator |
2026-02-03 03:41:42.546350 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-02-03 03:41:42.546365 | orchestrator | Tuesday 03 February 2026 03:41:29 +0000 (0:00:00.880) 0:03:37.808 ******
2026-02-03 03:41:42.546381 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:41:42.546397 | orchestrator |
2026-02-03 03:41:42.546413 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-02-03 03:41:42.546428 | orchestrator | Tuesday 03 February 2026 03:41:29 +0000 (0:00:00.173) 0:03:37.982 ******
2026-02-03 03:41:42.546443 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-03 03:41:42.546457 | orchestrator |
2026-02-03 03:41:42.546466 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-02-03 03:41:42.546475 | orchestrator | Tuesday 03 February 2026 03:41:31 +0000 (0:00:01.058) 0:03:39.040 ******
2026-02-03 03:41:42.546500 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:41:42.546510 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:41:42.546518 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:41:42.546527 | orchestrator |
2026-02-03 03:41:42.546536 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-02-03 03:41:42.546544 | orchestrator | Tuesday 03 February 2026 03:41:31 +0000 (0:00:00.382) 0:03:39.423 ******
2026-02-03 03:41:42.546553 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:41:42.546562 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:41:42.546570 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:41:42.546579 | orchestrator |
2026-02-03 03:41:42.546587 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-02-03 03:41:42.546596 | orchestrator | Tuesday 03 February 2026 03:41:31 +0000 (0:00:00.359) 0:03:39.783 ******
2026-02-03 03:41:42.546605 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:41:42.546614 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:41:42.546623 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:41:42.546631 | orchestrator |
2026-02-03 03:41:42.546640 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-02-03 03:41:42.546649 | orchestrator | Tuesday 03 February 2026 03:41:33 +0000 (0:00:01.528) 0:03:41.311 ******
2026-02-03 03:41:42.546659 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:41:42.546674 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:41:42.546686 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:41:42.546697 | orchestrator |
2026-02-03 03:41:42.546743 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-02-03 03:41:42.546758 | orchestrator | Tuesday 03 February 2026 03:41:34 +0000 (0:00:00.801) 0:03:42.113 ******
2026-02-03 03:41:42.546772 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:41:42.546784 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:41:42.546797 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:41:42.546812 | orchestrator |
2026-02-03 03:41:42.546826 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-02-03 03:41:42.546839 | orchestrator | Tuesday 03 February 2026 03:41:34 +0000 (0:00:00.701) 0:03:42.814 ******
2026-02-03 03:41:42.546853 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:41:42.546867 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:41:42.546881 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:41:42.546895 | orchestrator |
2026-02-03 03:41:42.546909 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-02-03 03:41:42.546922 | orchestrator | Tuesday 03 February 2026 03:41:35 +0000 (0:00:00.678) 0:03:43.493 ******
2026-02-03 03:41:42.546937 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:41:42.546951 | orchestrator |
2026-02-03 03:41:42.546965 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-02-03 03:41:42.546978 | orchestrator | Tuesday 03 February 2026 03:41:37 +0000 (0:00:01.918) 0:03:45.412 ******
2026-02-03 03:41:42.546992 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:41:42.547007 | orchestrator |
2026-02-03 03:41:42.547021 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-02-03 03:41:42.547036 | orchestrator | Tuesday 03 February 2026 03:41:38 +0000 (0:00:00.738) 0:03:46.150 ******
2026-02-03 03:41:42.547050 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-03 03:41:42.547065 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-03 03:41:42.547080 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-03 03:41:42.547094 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-03 03:41:42.547110 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-02-03 03:41:42.547125 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-03 03:41:42.547139 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-03 03:41:42.547154 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-02-03 03:41:42.547170 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-03 03:41:42.547198 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-02-03 03:41:42.547213 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-02-03 03:41:42.547227 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-02-03 03:41:42.547241 | orchestrator |
2026-02-03 03:41:42.547257 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-02-03 03:41:42.547271 | orchestrator | Tuesday 03 February 2026 03:41:41 +0000 (0:00:03.196) 0:03:49.347 ******
2026-02-03 03:41:42.547286 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:41:42.547296 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:41:42.547319 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:41:42.547335 | orchestrator |
2026-02-03 03:41:42.547350 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-02-03 03:41:42.547380 | orchestrator | Tuesday 03 February 2026 03:41:42 +0000 (0:00:01.184) 0:03:50.531 ******
2026-02-03 03:42:44.587539 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:42:44.587618 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:42:44.587624 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:42:44.587628 | orchestrator |
2026-02-03 03:42:44.587634 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-02-03 03:42:44.587640 | orchestrator | Tuesday 03 February 2026 03:41:42 +0000 (0:00:00.369) 0:03:50.901 ******
2026-02-03 03:42:44.587644 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:42:44.587648 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:42:44.587652 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:42:44.587656 | orchestrator |
2026-02-03 03:42:44.587661 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-02-03 03:42:44.587665 | orchestrator | Tuesday 03 February 2026 03:41:43 +0000 (0:00:00.652) 0:03:51.554 ******
2026-02-03 03:42:44.587701 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:42:44.587707 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:42:44.587711 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:42:44.587715 | orchestrator |
2026-02-03 03:42:44.587719 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-02-03 03:42:44.587723 | orchestrator | Tuesday 03 February 2026 03:41:45 +0000 (0:00:01.522) 0:03:53.076 ******
2026-02-03 03:42:44.587727 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:42:44.587731 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:42:44.587734 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:42:44.587738 | orchestrator |
2026-02-03 03:42:44.587742 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-02-03 03:42:44.587746 | orchestrator | Tuesday 03 February 2026 03:41:46 +0000 (0:00:01.287) 0:03:54.363 ******
2026-02-03 03:42:44.587750 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:42:44.587754 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:42:44.587758 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:42:44.587762 | orchestrator |
2026-02-03 03:42:44.587766 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-02-03 03:42:44.587770 | orchestrator | Tuesday 03 February 2026 03:41:46 +0000 (0:00:00.325) 0:03:54.689 ******
2026-02-03 03:42:44.587774 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 03:42:44.587778 | orchestrator |
2026-02-03 03:42:44.587783 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-02-03 03:42:44.587787 | orchestrator | Tuesday 03 February 2026 03:41:47 +0000 (0:00:00.893) 0:03:55.583 ******
2026-02-03 03:42:44.587791 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:42:44.587795 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:42:44.587798 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:42:44.587802 | orchestrator |
2026-02-03 03:42:44.587806 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-02-03 03:42:44.587810 | orchestrator | Tuesday 03 February 2026 03:41:47 +0000 (0:00:00.346) 0:03:55.929 ******
2026-02-03 03:42:44.587814 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:42:44.587837 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:42:44.587841 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:42:44.587845 | orchestrator |
2026-02-03 03:42:44.587849 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-02-03 03:42:44.587853 | orchestrator | Tuesday 03 February 2026 03:41:48 +0000 (0:00:00.324) 0:03:56.254 ******
2026-02-03 03:42:44.587856 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 03:42:44.587861 | orchestrator |
2026-02-03 03:42:44.587865 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-02-03 03:42:44.587869 | orchestrator | Tuesday 03 February 2026 03:41:49 +0000 (0:00:00.940) 0:03:57.194 ******
2026-02-03 03:42:44.587873 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:42:44.587876 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:42:44.587880 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:42:44.587884 | orchestrator |
2026-02-03 03:42:44.587888 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-02-03 03:42:44.587892 | orchestrator | Tuesday 03 February 2026 03:41:50 +0000 (0:00:01.587) 0:03:58.782 ******
2026-02-03 03:42:44.587895 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:42:44.587899 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:42:44.587903 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:42:44.587907 | orchestrator |
2026-02-03 03:42:44.587911 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-02-03 03:42:44.587915 | orchestrator | Tuesday 03 February 2026 03:41:51 +0000 (0:00:01.163) 0:03:59.945 ******
2026-02-03 03:42:44.587918 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:42:44.587922 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:42:44.587926 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:42:44.587930 | orchestrator |
2026-02-03 03:42:44.587933 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-02-03 03:42:44.587937 | orchestrator | Tuesday 03 February 2026 03:41:54 +0000 (0:00:02.178) 0:04:02.124 ******
2026-02-03 03:42:44.587941 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:42:44.587945 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:42:44.587949 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:42:44.587953 | orchestrator |
2026-02-03 03:42:44.587957 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-02-03 03:42:44.587964 | orchestrator | Tuesday 03 February 2026 03:41:56 +0000 (0:00:02.084) 0:04:04.209 ******
2026-02-03 03:42:44.587970 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 03:42:44.587975 | orchestrator |
2026-02-03 03:42:44.587981 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-02-03 03:42:44.587987 | orchestrator | Tuesday 03 February 2026 03:41:56 +0000 (0:00:00.602) 0:04:04.811 ******
2026-02-03 03:42:44.588005 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-02-03 03:42:44.588012 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:42:44.588018 | orchestrator |
2026-02-03 03:42:44.588024 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-02-03 03:42:44.588043 | orchestrator | Tuesday 03 February 2026 03:42:18 +0000 (0:00:22.091) 0:04:26.902 ******
2026-02-03 03:42:44.588049 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:42:44.588055 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:42:44.588061 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:42:44.588067 | orchestrator |
2026-02-03 03:42:44.588073 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-02-03 03:42:44.588079 | orchestrator | Tuesday 03 February 2026 03:42:27 +0000 (0:00:08.789) 0:04:35.692 ******
2026-02-03 03:42:44.588085 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:42:44.588091 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:42:44.588097 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:42:44.588110 | orchestrator |
2026-02-03 03:42:44.588117 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-02-03 03:42:44.588122 | orchestrator | Tuesday 03 February 2026 03:42:28 +0000 (0:00:00.359) 0:04:36.052 ******
2026-02-03 03:42:44.588128 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ce6a4dc1c910d1f6152601bab1cefcdd0e98c0e'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-03 03:42:44.588135 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ce6a4dc1c910d1f6152601bab1cefcdd0e98c0e'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-03 03:42:44.588142 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ce6a4dc1c910d1f6152601bab1cefcdd0e98c0e'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-03 03:42:44.588148 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ce6a4dc1c910d1f6152601bab1cefcdd0e98c0e'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-03 03:42:44.588153 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ce6a4dc1c910d1f6152601bab1cefcdd0e98c0e'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-03 03:42:44.588158 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1ce6a4dc1c910d1f6152601bab1cefcdd0e98c0e'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__1ce6a4dc1c910d1f6152601bab1cefcdd0e98c0e'}])
2026-02-03 03:42:44.588164 | orchestrator |
2026-02-03 03:42:44.588169 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-03 03:42:44.588174 | orchestrator | Tuesday 03 February 2026 03:42:42 +0000 (0:00:14.563) 0:04:50.615 ******
2026-02-03 03:42:44.588178 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:42:44.588183 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:42:44.588187 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:42:44.588191 | orchestrator |
2026-02-03 03:42:44.588196 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-03 03:42:44.588200 | orchestrator | Tuesday 03 February 2026 03:42:43 +0000 (0:00:00.390) 0:04:51.006 ******
2026-02-03 03:42:44.588205 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 03:42:44.588209 | orchestrator |
2026-02-03 03:42:44.588213 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-02-03 03:42:44.588218 | orchestrator | Tuesday 03 February 2026 03:42:43 +0000 (0:00:00.545) 0:04:51.552 ******
2026-02-03 03:42:44.588223 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:42:44.588227 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:42:44.588232 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:42:44.588237 | orchestrator |
2026-02-03 03:42:44.588241 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-03 03:42:44.588249 | orchestrator | Tuesday 03 February 2026 03:42:44 +0000 (0:00:00.639) 0:04:52.191 ******
2026-02-03 03:42:44.588256 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:42:44.588260 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:42:44.588264 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:42:44.588268 | orchestrator |
2026-02-03 03:42:44.588276 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-03 03:43:10.561265 | orchestrator | Tuesday 03 February 2026 03:42:44 +0000 (0:00:00.379) 0:04:52.571 ******
2026-02-03 03:43:10.561370 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-03 03:43:10.561388 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-03 03:43:10.561396 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-03 03:43:10.561403 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:43:10.561417 | orchestrator |
2026-02-03 03:43:10.561425 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-03 03:43:10.561432 | orchestrator | Tuesday 03 February 2026 03:42:45 +0000 (0:00:00.649) 0:04:53.220 ******
2026-02-03 03:43:10.561439 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:43:10.561447 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:43:10.561453 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:43:10.561459 | orchestrator |
2026-02-03 03:43:10.561466 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-02-03 03:43:10.561473 | orchestrator |
2026-02-03 03:43:10.561480 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml]
************************ 2026-02-03 03:43:10.561486 | orchestrator | Tuesday 03 February 2026 03:42:46 +0000 (0:00:00.886) 0:04:54.107 ****** 2026-02-03 03:43:10.561494 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:43:10.561502 | orchestrator | 2026-02-03 03:43:10.561508 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-03 03:43:10.561515 | orchestrator | Tuesday 03 February 2026 03:42:46 +0000 (0:00:00.566) 0:04:54.673 ****** 2026-02-03 03:43:10.561521 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:43:10.561528 | orchestrator | 2026-02-03 03:43:10.561534 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-03 03:43:10.561541 | orchestrator | Tuesday 03 February 2026 03:42:47 +0000 (0:00:00.834) 0:04:55.508 ****** 2026-02-03 03:43:10.561547 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:43:10.561554 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:43:10.561559 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:43:10.561565 | orchestrator | 2026-02-03 03:43:10.561571 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-03 03:43:10.561577 | orchestrator | Tuesday 03 February 2026 03:42:48 +0000 (0:00:00.779) 0:04:56.287 ****** 2026-02-03 03:43:10.561583 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:43:10.561590 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:43:10.561597 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:43:10.561603 | orchestrator | 2026-02-03 03:43:10.561609 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-03 03:43:10.561616 | orchestrator | Tuesday 03 February 2026 03:42:48 +0000 
(0:00:00.312) 0:04:56.600 ****** 2026-02-03 03:43:10.561622 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:43:10.561628 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:43:10.561635 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:43:10.561641 | orchestrator | 2026-02-03 03:43:10.561647 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-03 03:43:10.561654 | orchestrator | Tuesday 03 February 2026 03:42:48 +0000 (0:00:00.343) 0:04:56.943 ****** 2026-02-03 03:43:10.561700 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:43:10.561707 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:43:10.561768 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:43:10.561775 | orchestrator | 2026-02-03 03:43:10.561782 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-03 03:43:10.561857 | orchestrator | Tuesday 03 February 2026 03:42:49 +0000 (0:00:00.598) 0:04:57.542 ****** 2026-02-03 03:43:10.561865 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:43:10.561872 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:43:10.561878 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:43:10.561885 | orchestrator | 2026-02-03 03:43:10.561892 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-03 03:43:10.561899 | orchestrator | Tuesday 03 February 2026 03:42:50 +0000 (0:00:00.763) 0:04:58.305 ****** 2026-02-03 03:43:10.561906 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:43:10.561913 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:43:10.561920 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:43:10.561927 | orchestrator | 2026-02-03 03:43:10.561934 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-03 03:43:10.561941 | orchestrator | Tuesday 03 February 2026 03:42:50 +0000 (0:00:00.337) 
0:04:58.643 ****** 2026-02-03 03:43:10.561948 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:43:10.561954 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:43:10.561961 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:43:10.561968 | orchestrator | 2026-02-03 03:43:10.561974 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-03 03:43:10.561981 | orchestrator | Tuesday 03 February 2026 03:42:50 +0000 (0:00:00.344) 0:04:58.988 ****** 2026-02-03 03:43:10.561989 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:43:10.561995 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:43:10.562002 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:43:10.562009 | orchestrator | 2026-02-03 03:43:10.562061 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-03 03:43:10.562068 | orchestrator | Tuesday 03 February 2026 03:42:52 +0000 (0:00:01.054) 0:05:00.042 ****** 2026-02-03 03:43:10.562074 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:43:10.562081 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:43:10.562087 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:43:10.562094 | orchestrator | 2026-02-03 03:43:10.562101 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-03 03:43:10.562108 | orchestrator | Tuesday 03 February 2026 03:42:52 +0000 (0:00:00.788) 0:05:00.830 ****** 2026-02-03 03:43:10.562115 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:43:10.562121 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:43:10.562141 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:43:10.562148 | orchestrator | 2026-02-03 03:43:10.562154 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-03 03:43:10.562177 | orchestrator | Tuesday 03 February 2026 03:42:53 +0000 (0:00:00.344) 0:05:01.175 ****** 2026-02-03 
03:43:10.562185 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:43:10.562191 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:43:10.562197 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:43:10.562203 | orchestrator | 2026-02-03 03:43:10.562210 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-03 03:43:10.562216 | orchestrator | Tuesday 03 February 2026 03:42:53 +0000 (0:00:00.351) 0:05:01.526 ****** 2026-02-03 03:43:10.562221 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:43:10.562228 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:43:10.562234 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:43:10.562239 | orchestrator | 2026-02-03 03:43:10.562244 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-03 03:43:10.562250 | orchestrator | Tuesday 03 February 2026 03:42:54 +0000 (0:00:00.587) 0:05:02.114 ****** 2026-02-03 03:43:10.562257 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:43:10.562262 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:43:10.562268 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:43:10.562275 | orchestrator | 2026-02-03 03:43:10.562291 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-03 03:43:10.562297 | orchestrator | Tuesday 03 February 2026 03:42:54 +0000 (0:00:00.344) 0:05:02.458 ****** 2026-02-03 03:43:10.562303 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:43:10.562309 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:43:10.562316 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:43:10.562322 | orchestrator | 2026-02-03 03:43:10.562329 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-03 03:43:10.562334 | orchestrator | Tuesday 03 February 2026 03:42:54 +0000 (0:00:00.369) 0:05:02.828 ****** 2026-02-03 03:43:10.562340 | 
orchestrator | skipping: [testbed-node-0] 2026-02-03 03:43:10.562346 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:43:10.562351 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:43:10.562357 | orchestrator | 2026-02-03 03:43:10.562363 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-03 03:43:10.562369 | orchestrator | Tuesday 03 February 2026 03:42:55 +0000 (0:00:00.329) 0:05:03.158 ****** 2026-02-03 03:43:10.562374 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:43:10.562380 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:43:10.562386 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:43:10.562392 | orchestrator | 2026-02-03 03:43:10.562398 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-03 03:43:10.562404 | orchestrator | Tuesday 03 February 2026 03:42:55 +0000 (0:00:00.583) 0:05:03.741 ****** 2026-02-03 03:43:10.562409 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:43:10.562415 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:43:10.562421 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:43:10.562427 | orchestrator | 2026-02-03 03:43:10.562433 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-03 03:43:10.562441 | orchestrator | Tuesday 03 February 2026 03:42:56 +0000 (0:00:00.375) 0:05:04.117 ****** 2026-02-03 03:43:10.562447 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:43:10.562453 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:43:10.562460 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:43:10.562466 | orchestrator | 2026-02-03 03:43:10.562472 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-03 03:43:10.562478 | orchestrator | Tuesday 03 February 2026 03:42:56 +0000 (0:00:00.350) 0:05:04.468 ****** 2026-02-03 03:43:10.562484 | orchestrator | ok: [testbed-node-0] 
2026-02-03 03:43:10.562490 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:43:10.562496 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:43:10.562501 | orchestrator | 2026-02-03 03:43:10.562507 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-03 03:43:10.562513 | orchestrator | Tuesday 03 February 2026 03:42:57 +0000 (0:00:00.826) 0:05:05.295 ****** 2026-02-03 03:43:10.562520 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-03 03:43:10.562527 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 03:43:10.562534 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 03:43:10.562540 | orchestrator | 2026-02-03 03:43:10.562546 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-03 03:43:10.562552 | orchestrator | Tuesday 03 February 2026 03:42:58 +0000 (0:00:00.708) 0:05:06.003 ****** 2026-02-03 03:43:10.562557 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:43:10.562564 | orchestrator | 2026-02-03 03:43:10.562570 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-02-03 03:43:10.562576 | orchestrator | Tuesday 03 February 2026 03:42:58 +0000 (0:00:00.591) 0:05:06.595 ****** 2026-02-03 03:43:10.562582 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:43:10.562588 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:43:10.562594 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:43:10.562599 | orchestrator | 2026-02-03 03:43:10.562605 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-02-03 03:43:10.562620 | orchestrator | Tuesday 03 February 2026 03:42:59 +0000 (0:00:00.706) 0:05:07.301 ****** 2026-02-03 03:43:10.562626 | 
orchestrator | skipping: [testbed-node-0] 2026-02-03 03:43:10.562632 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:43:10.562638 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:43:10.562644 | orchestrator | 2026-02-03 03:43:10.562650 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-02-03 03:43:10.562656 | orchestrator | Tuesday 03 February 2026 03:42:59 +0000 (0:00:00.621) 0:05:07.923 ****** 2026-02-03 03:43:10.562685 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-03 03:43:10.562693 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-03 03:43:10.562699 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-03 03:43:10.562704 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-02-03 03:43:10.562710 | orchestrator | 2026-02-03 03:43:10.562722 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-02-03 03:43:10.562728 | orchestrator | Tuesday 03 February 2026 03:43:10 +0000 (0:00:10.243) 0:05:18.167 ****** 2026-02-03 03:43:10.562734 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:43:10.562751 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:44:13.118525 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:44:13.118621 | orchestrator | 2026-02-03 03:44:13.118684 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-02-03 03:44:13.118693 | orchestrator | Tuesday 03 February 2026 03:43:10 +0000 (0:00:00.388) 0:05:18.555 ****** 2026-02-03 03:44:13.118701 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-03 03:44:13.118708 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-03 03:44:13.118714 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-03 03:44:13.118721 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-03 03:44:13.118728 | orchestrator | ok: 
[testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 03:44:13.118735 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 03:44:13.118742 | orchestrator | 2026-02-03 03:44:13.118748 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-02-03 03:44:13.118755 | orchestrator | Tuesday 03 February 2026 03:43:12 +0000 (0:00:02.296) 0:05:20.852 ****** 2026-02-03 03:44:13.118761 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-03 03:44:13.118768 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-03 03:44:13.118776 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-03 03:44:13.118782 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-03 03:44:13.118789 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-02-03 03:44:13.118796 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-02-03 03:44:13.118802 | orchestrator | 2026-02-03 03:44:13.118808 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-02-03 03:44:13.118815 | orchestrator | Tuesday 03 February 2026 03:43:14 +0000 (0:00:01.582) 0:05:22.434 ****** 2026-02-03 03:44:13.118822 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:44:13.118828 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:44:13.118835 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:44:13.118842 | orchestrator | 2026-02-03 03:44:13.118849 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-02-03 03:44:13.118855 | orchestrator | Tuesday 03 February 2026 03:43:15 +0000 (0:00:00.711) 0:05:23.146 ****** 2026-02-03 03:44:13.118862 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:44:13.118868 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:44:13.118875 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:44:13.118881 | 
orchestrator | 2026-02-03 03:44:13.118887 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-03 03:44:13.118894 | orchestrator | Tuesday 03 February 2026 03:43:15 +0000 (0:00:00.330) 0:05:23.477 ****** 2026-02-03 03:44:13.118924 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:44:13.118931 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:44:13.118938 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:44:13.118944 | orchestrator | 2026-02-03 03:44:13.118950 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-03 03:44:13.118957 | orchestrator | Tuesday 03 February 2026 03:43:15 +0000 (0:00:00.338) 0:05:23.816 ****** 2026-02-03 03:44:13.118964 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:44:13.118970 | orchestrator | 2026-02-03 03:44:13.118977 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-02-03 03:44:13.118984 | orchestrator | Tuesday 03 February 2026 03:43:16 +0000 (0:00:00.874) 0:05:24.690 ****** 2026-02-03 03:44:13.118990 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:44:13.118996 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:44:13.119003 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:44:13.119009 | orchestrator | 2026-02-03 03:44:13.119016 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-02-03 03:44:13.119022 | orchestrator | Tuesday 03 February 2026 03:43:17 +0000 (0:00:00.367) 0:05:25.058 ****** 2026-02-03 03:44:13.119028 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:44:13.119034 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:44:13.119040 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:44:13.119047 | orchestrator | 2026-02-03 03:44:13.119053 | orchestrator | TASK 
[ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-02-03 03:44:13.119059 | orchestrator | Tuesday 03 February 2026 03:43:17 +0000 (0:00:00.348) 0:05:25.406 ****** 2026-02-03 03:44:13.119066 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:44:13.119073 | orchestrator | 2026-02-03 03:44:13.119079 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-02-03 03:44:13.119086 | orchestrator | Tuesday 03 February 2026 03:43:18 +0000 (0:00:00.838) 0:05:26.244 ****** 2026-02-03 03:44:13.119093 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:44:13.119100 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:44:13.119106 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:44:13.119113 | orchestrator | 2026-02-03 03:44:13.119119 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-02-03 03:44:13.119126 | orchestrator | Tuesday 03 February 2026 03:43:19 +0000 (0:00:01.319) 0:05:27.564 ****** 2026-02-03 03:44:13.119132 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:44:13.119139 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:44:13.119145 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:44:13.119152 | orchestrator | 2026-02-03 03:44:13.119158 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-02-03 03:44:13.119165 | orchestrator | Tuesday 03 February 2026 03:43:20 +0000 (0:00:01.147) 0:05:28.712 ****** 2026-02-03 03:44:13.119172 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:44:13.119178 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:44:13.119185 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:44:13.119191 | orchestrator | 2026-02-03 03:44:13.119198 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 
2026-02-03 03:44:13.119217 | orchestrator | Tuesday 03 February 2026 03:43:22 +0000 (0:00:02.254) 0:05:30.966 ****** 2026-02-03 03:44:13.119223 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:44:13.119230 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:44:13.119236 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:44:13.119243 | orchestrator | 2026-02-03 03:44:13.119264 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-03 03:44:13.119271 | orchestrator | Tuesday 03 February 2026 03:43:25 +0000 (0:00:02.133) 0:05:33.100 ****** 2026-02-03 03:44:13.119277 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:44:13.119283 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:44:13.119289 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-02-03 03:44:13.119302 | orchestrator | 2026-02-03 03:44:13.119309 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-02-03 03:44:13.119315 | orchestrator | Tuesday 03 February 2026 03:43:25 +0000 (0:00:00.472) 0:05:33.573 ****** 2026-02-03 03:44:13.119321 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-02-03 03:44:13.119328 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-02-03 03:44:13.119334 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-02-03 03:44:13.119341 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-02-03 03:44:13.119347 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2026-02-03 03:44:13.119354 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-03 03:44:13.119360 | orchestrator | 2026-02-03 03:44:13.119367 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-02-03 03:44:13.119373 | orchestrator | Tuesday 03 February 2026 03:43:55 +0000 (0:00:30.103) 0:06:03.677 ****** 2026-02-03 03:44:13.119379 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-03 03:44:13.119386 | orchestrator | 2026-02-03 03:44:13.119391 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-02-03 03:44:13.119398 | orchestrator | Tuesday 03 February 2026 03:43:57 +0000 (0:00:01.333) 0:06:05.010 ****** 2026-02-03 03:44:13.119412 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:44:13.119420 | orchestrator | 2026-02-03 03:44:13.119426 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-02-03 03:44:13.119433 | orchestrator | Tuesday 03 February 2026 03:43:57 +0000 (0:00:00.307) 0:06:05.318 ****** 2026-02-03 03:44:13.119439 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:44:13.119456 | orchestrator | 2026-02-03 03:44:13.119463 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-02-03 03:44:13.119469 | orchestrator | Tuesday 03 February 2026 03:43:57 +0000 (0:00:00.433) 0:06:05.752 ****** 2026-02-03 03:44:13.119475 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-02-03 03:44:13.119481 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-02-03 03:44:13.119488 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-02-03 03:44:13.119494 | orchestrator | 2026-02-03 03:44:13.119502 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-02-03 03:44:13.119508 | orchestrator | Tuesday 03 February 2026 03:44:04 +0000 (0:00:06.452) 0:06:12.205 ****** 2026-02-03 03:44:13.119515 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-02-03 03:44:13.119521 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-02-03 03:44:13.119527 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-02-03 03:44:13.119533 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-02-03 03:44:13.119540 | orchestrator | 2026-02-03 03:44:13.119546 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-03 03:44:13.119553 | orchestrator | Tuesday 03 February 2026 03:44:09 +0000 (0:00:05.005) 0:06:17.211 ****** 2026-02-03 03:44:13.119559 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:44:13.119565 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:44:13.119572 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:44:13.119578 | orchestrator | 2026-02-03 03:44:13.119584 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-03 03:44:13.119591 | orchestrator | Tuesday 03 February 2026 03:44:09 +0000 (0:00:00.668) 0:06:17.879 ****** 2026-02-03 03:44:13.119597 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:44:13.119610 | orchestrator | 2026-02-03 03:44:13.119616 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-02-03 03:44:13.119622 | orchestrator | Tuesday 03 February 2026 03:44:10 +0000 (0:00:00.712) 0:06:18.592 ****** 2026-02-03 03:44:13.119628 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:44:13.119669 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:44:13.119676 | orchestrator | ok: 
[testbed-node-2]
2026-02-03 03:44:13.119682 | orchestrator |
2026-02-03 03:44:13.119689 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-02-03 03:44:13.119695 | orchestrator | Tuesday 03 February 2026 03:44:10 +0000 (0:00:00.319) 0:06:18.912 ******
2026-02-03 03:44:13.119701 | orchestrator | changed: [testbed-node-0]
2026-02-03 03:44:13.119707 | orchestrator | changed: [testbed-node-1]
2026-02-03 03:44:13.119714 | orchestrator | changed: [testbed-node-2]
2026-02-03 03:44:13.119720 | orchestrator |
2026-02-03 03:44:13.119726 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-02-03 03:44:13.119732 | orchestrator | Tuesday 03 February 2026 03:44:12 +0000 (0:00:01.164) 0:06:20.076 ******
2026-02-03 03:44:13.119743 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-03 03:44:13.119750 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-03 03:44:13.119756 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-03 03:44:13.119762 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:44:13.119769 | orchestrator |
2026-02-03 03:44:13.119780 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-02-03 03:44:29.907200 | orchestrator | Tuesday 03 February 2026 03:44:13 +0000 (0:00:01.025) 0:06:21.102 ******
2026-02-03 03:44:29.907321 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:44:29.907339 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:44:29.907351 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:44:29.907362 | orchestrator |
2026-02-03 03:44:29.907376 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-02-03 03:44:29.907388 | orchestrator |
2026-02-03 03:44:29.907400 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-03 03:44:29.907412 | orchestrator | Tuesday 03 February 2026 03:44:13 +0000 (0:00:00.583) 0:06:21.685 ******
2026-02-03 03:44:29.907424 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 03:44:29.907437 | orchestrator |
2026-02-03 03:44:29.907448 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-03 03:44:29.907459 | orchestrator | Tuesday 03 February 2026 03:44:14 +0000 (0:00:00.677) 0:06:22.363 ******
2026-02-03 03:44:29.907471 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 03:44:29.907482 | orchestrator |
2026-02-03 03:44:29.907501 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-03 03:44:29.907529 | orchestrator | Tuesday 03 February 2026 03:44:14 +0000 (0:00:00.518) 0:06:22.881 ******
2026-02-03 03:44:29.907550 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:44:29.907568 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:44:29.907586 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:44:29.907604 | orchestrator |
2026-02-03 03:44:29.907622 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-03 03:44:29.907685 | orchestrator | Tuesday 03 February 2026 03:44:15 +0000 (0:00:00.286) 0:06:23.168 ******
2026-02-03 03:44:29.907705 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:44:29.907723 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:44:29.907734 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:44:29.907746 | orchestrator |
2026-02-03 03:44:29.907760 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-03 03:44:29.907773 | orchestrator | Tuesday 03 February 2026 03:44:16 +0000 (0:00:00.861) 0:06:24.030 ******
2026-02-03 03:44:29.907814 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:44:29.907827 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:44:29.907839 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:44:29.907851 | orchestrator |
2026-02-03 03:44:29.907865 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-03 03:44:29.907878 | orchestrator | Tuesday 03 February 2026 03:44:16 +0000 (0:00:00.687) 0:06:24.717 ******
2026-02-03 03:44:29.907891 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:44:29.907911 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:44:29.907930 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:44:29.907951 | orchestrator |
2026-02-03 03:44:29.907970 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-03 03:44:29.907991 | orchestrator | Tuesday 03 February 2026 03:44:17 +0000 (0:00:00.670) 0:06:25.387 ******
2026-02-03 03:44:29.908009 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:44:29.908023 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:44:29.908036 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:44:29.908047 | orchestrator |
2026-02-03 03:44:29.908058 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-03 03:44:29.908069 | orchestrator | Tuesday 03 February 2026 03:44:17 +0000 (0:00:00.334) 0:06:25.721 ******
2026-02-03 03:44:29.908079 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:44:29.908090 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:44:29.908101 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:44:29.908111 | orchestrator |
2026-02-03 03:44:29.908122 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-03 03:44:29.908133 | orchestrator | Tuesday 03 February 2026 03:44:18 +0000 (0:00:00.525) 0:06:26.247 ******
2026-02-03 03:44:29.908144 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:44:29.908158 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:44:29.908176 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:44:29.908195 | orchestrator |
2026-02-03 03:44:29.908210 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-03 03:44:29.908221 | orchestrator | Tuesday 03 February 2026 03:44:18 +0000 (0:00:00.323) 0:06:26.570 ******
2026-02-03 03:44:29.908232 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:44:29.908243 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:44:29.908254 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:44:29.908265 | orchestrator |
2026-02-03 03:44:29.908276 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-03 03:44:29.908288 | orchestrator | Tuesday 03 February 2026 03:44:19 +0000 (0:00:00.642) 0:06:27.213 ******
2026-02-03 03:44:29.908299 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:44:29.908309 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:44:29.908320 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:44:29.908331 | orchestrator |
2026-02-03 03:44:29.908342 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-03 03:44:29.908353 | orchestrator | Tuesday 03 February 2026 03:44:19 +0000 (0:00:00.651) 0:06:27.864 ******
2026-02-03 03:44:29.908364 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:44:29.908375 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:44:29.908386 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:44:29.908397 | orchestrator |
2026-02-03 03:44:29.908408 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-03 03:44:29.908419 | orchestrator | Tuesday 03 February 2026 03:44:20 +0000 (0:00:00.479) 0:06:28.343 ******
2026-02-03 03:44:29.908430 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:44:29.908441 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:44:29.908452 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:44:29.908463 | orchestrator |
2026-02-03 03:44:29.908490 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-03 03:44:29.908501 | orchestrator | Tuesday 03 February 2026 03:44:20 +0000 (0:00:00.309) 0:06:28.653 ******
2026-02-03 03:44:29.908519 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:44:29.908537 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:44:29.908566 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:44:29.908577 | orchestrator |
2026-02-03 03:44:29.908609 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-03 03:44:29.908622 | orchestrator | Tuesday 03 February 2026 03:44:21 +0000 (0:00:00.350) 0:06:29.003 ******
2026-02-03 03:44:29.908664 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:44:29.908676 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:44:29.908687 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:44:29.908698 | orchestrator |
2026-02-03 03:44:29.908709 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-03 03:44:29.908720 | orchestrator | Tuesday 03 February 2026 03:44:21 +0000 (0:00:00.372) 0:06:29.376 ******
2026-02-03 03:44:29.908731 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:44:29.908742 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:44:29.908753 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:44:29.908763 | orchestrator |
2026-02-03 03:44:29.908774 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-03 03:44:29.908785 | orchestrator | Tuesday 03 February 2026 03:44:21 +0000 (0:00:00.485) 0:06:29.861 ******
2026-02-03 03:44:29.908796 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:44:29.908807 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:44:29.908818 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:44:29.908828 | orchestrator |
2026-02-03 03:44:29.908839 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-03 03:44:29.908850 | orchestrator | Tuesday 03 February 2026 03:44:22 +0000 (0:00:00.300) 0:06:30.162 ******
2026-02-03 03:44:29.908861 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:44:29.908872 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:44:29.908882 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:44:29.908893 | orchestrator |
2026-02-03 03:44:29.908904 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-03 03:44:29.908915 | orchestrator | Tuesday 03 February 2026 03:44:22 +0000 (0:00:00.281) 0:06:30.444 ******
2026-02-03 03:44:29.908925 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:44:29.908936 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:44:29.908947 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:44:29.908958 | orchestrator |
2026-02-03 03:44:29.908969 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-03 03:44:29.908980 | orchestrator | Tuesday 03 February 2026 03:44:22 +0000 (0:00:00.325) 0:06:30.770 ******
2026-02-03 03:44:29.908990 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:44:29.909001 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:44:29.909012 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:44:29.909022 | orchestrator |
2026-02-03 03:44:29.909033 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-03 03:44:29.909044 | orchestrator | Tuesday 03 February 2026 03:44:23 +0000 (0:00:00.559) 0:06:31.329 ******
2026-02-03 03:44:29.909055 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:44:29.909066 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:44:29.909076 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:44:29.909087 | orchestrator |
2026-02-03 03:44:29.909098 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-02-03 03:44:29.909109 | orchestrator | Tuesday 03 February 2026 03:44:23 +0000 (0:00:00.559) 0:06:31.889 ******
2026-02-03 03:44:29.909120 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:44:29.909131 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:44:29.909142 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:44:29.909152 | orchestrator |
2026-02-03 03:44:29.909163 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-02-03 03:44:29.909180 | orchestrator | Tuesday 03 February 2026 03:44:24 +0000 (0:00:00.298) 0:06:32.188 ******
2026-02-03 03:44:29.909199 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-03 03:44:29.909219 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-03 03:44:29.909253 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-03 03:44:29.909273 | orchestrator |
2026-02-03 03:44:29.909293 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-02-03 03:44:29.909305 | orchestrator | Tuesday 03 February 2026 03:44:25 +0000 (0:00:01.012) 0:06:33.200 ******
2026-02-03 03:44:29.909318 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 03:44:29.909337 | orchestrator |
2026-02-03 03:44:29.909357 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-02-03 03:44:29.909375 | orchestrator | Tuesday 03 February 2026 03:44:25 +0000 (0:00:00.562) 0:06:33.762 ******
2026-02-03 03:44:29.909394 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:44:29.909415 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:44:29.909426 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:44:29.909437 | orchestrator |
2026-02-03 03:44:29.909448 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-02-03 03:44:29.909460 | orchestrator | Tuesday 03 February 2026 03:44:26 +0000 (0:00:00.334) 0:06:34.096 ******
2026-02-03 03:44:29.909470 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:44:29.909481 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:44:29.909492 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:44:29.909502 | orchestrator |
2026-02-03 03:44:29.909514 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-02-03 03:44:29.909524 | orchestrator | Tuesday 03 February 2026 03:44:26 +0000 (0:00:00.593) 0:06:34.690 ******
2026-02-03 03:44:29.909535 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:44:29.909546 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:44:29.909557 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:44:29.909567 | orchestrator |
2026-02-03 03:44:29.909578 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-02-03 03:44:29.909589 | orchestrator | Tuesday 03 February 2026 03:44:27 +0000 (0:00:00.668) 0:06:35.359 ******
2026-02-03 03:44:29.909600 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:44:29.909617 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:44:29.909671 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:44:29.909690 | orchestrator |
2026-02-03 03:44:29.909709 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-02-03 03:44:29.909728 | orchestrator | Tuesday 03 February 2026 03:44:27 +0000 (0:00:00.353) 0:06:35.712 ******
2026-02-03 03:44:29.909759 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-03 03:45:32.209526 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-03 03:45:32.209693 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-03 03:45:32.209716 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-03 03:45:32.209729 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-03 03:45:32.209736 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-03 03:45:32.209743 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-03 03:45:32.209750 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-03 03:45:32.209757 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-03 03:45:32.209763 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-03 03:45:32.209770 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-03 03:45:32.209776 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-03 03:45:32.209783 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-03 03:45:32.209814 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-03 03:45:32.209840 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-03 03:45:32.209847 | orchestrator |
2026-02-03 03:45:32.209855 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-02-03 03:45:32.209862 | orchestrator | Tuesday 03 February 2026 03:44:29 +0000 (0:00:02.181) 0:06:37.894 ******
2026-02-03 03:45:32.209868 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:45:32.209876 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:45:32.209882 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:45:32.209888 | orchestrator |
2026-02-03 03:45:32.209895 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-02-03 03:45:32.209901 | orchestrator | Tuesday 03 February 2026 03:44:30 +0000 (0:00:00.506) 0:06:38.401 ******
2026-02-03 03:45:32.209908 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 03:45:32.209914 | orchestrator |
2026-02-03 03:45:32.209920 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-02-03 03:45:32.209926 | orchestrator | Tuesday 03 February 2026 03:44:30 +0000 (0:00:00.502) 0:06:38.903 ******
2026-02-03 03:45:32.209933 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-03 03:45:32.209939 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-03 03:45:32.209945 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-03 03:45:32.209952 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-02-03 03:45:32.209959 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-02-03 03:45:32.209965 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-02-03 03:45:32.209971 | orchestrator |
2026-02-03 03:45:32.209977 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-02-03 03:45:32.209984 | orchestrator | Tuesday 03 February 2026 03:44:31 +0000 (0:00:00.954) 0:06:39.858 ******
2026-02-03 03:45:32.209990 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-03 03:45:32.209996 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-03 03:45:32.210002 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-03 03:45:32.210009 | orchestrator |
2026-02-03 03:45:32.210054 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-02-03 03:45:32.210062 | orchestrator | Tuesday 03 February 2026 03:44:34 +0000 (0:00:02.150) 0:06:42.008 ******
2026-02-03 03:45:32.210070 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-03 03:45:32.210078 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-03 03:45:32.210085 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:45:32.210092 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-03 03:45:32.210099 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-03 03:45:32.210106 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:45:32.210114 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-03 03:45:32.210121 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-03 03:45:32.210128 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:45:32.210136 | orchestrator |
2026-02-03 03:45:32.210143 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-02-03 03:45:32.210150 | orchestrator | Tuesday 03 February 2026 03:44:35 +0000 (0:00:01.367) 0:06:43.376 ******
2026-02-03 03:45:32.210158 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-03 03:45:32.210165 | orchestrator |
2026-02-03 03:45:32.210172 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-02-03 03:45:32.210191 | orchestrator | Tuesday 03 February 2026 03:44:37 +0000 (0:00:01.991) 0:06:45.367 ******
2026-02-03 03:45:32.210199 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 03:45:32.210212 | orchestrator |
2026-02-03 03:45:32.210219 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-02-03 03:45:32.210227 | orchestrator | Tuesday 03 February 2026 03:44:37 +0000 (0:00:00.560) 0:06:45.927 ******
2026-02-03 03:45:32.210251 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'})
2026-02-03 03:45:32.210260 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'})
2026-02-03 03:45:32.210268 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'})
2026-02-03 03:45:32.210275 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'})
2026-02-03 03:45:32.210283 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'})
2026-02-03 03:45:32.210290 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'})
2026-02-03 03:45:32.210297 | orchestrator |
2026-02-03 03:45:32.210305 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-02-03 03:45:32.210312 | orchestrator | Tuesday 03 February 2026 03:45:20 +0000 (0:00:42.643) 0:07:28.571 ******
2026-02-03 03:45:32.210319 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:45:32.210326 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:45:32.210334 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:45:32.210342 | orchestrator |
2026-02-03 03:45:32.210349 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-02-03 03:45:32.210356 | orchestrator | Tuesday 03 February 2026 03:45:20 +0000 (0:00:00.328) 0:07:28.899 ******
2026-02-03 03:45:32.210364 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 03:45:32.210371 | orchestrator |
2026-02-03 03:45:32.210378 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-02-03 03:45:32.210386 | orchestrator | Tuesday 03 February 2026 03:45:21 +0000 (0:00:00.553) 0:07:29.453 ******
2026-02-03 03:45:32.210394 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:45:32.210401 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:45:32.210409 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:45:32.210416 | orchestrator |
2026-02-03 03:45:32.210422 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-02-03 03:45:32.210428 | orchestrator | Tuesday 03 February 2026 03:45:22 +0000 (0:00:00.968) 0:07:30.421 ******
2026-02-03 03:45:32.210435 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:45:32.210441 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:45:32.210447 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:45:32.210454 | orchestrator |
2026-02-03 03:45:32.210460 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-02-03 03:45:32.210466 | orchestrator | Tuesday 03 February 2026 03:45:25 +0000 (0:00:02.582) 0:07:33.004 ******
2026-02-03 03:45:32.210472 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 03:45:32.210479 | orchestrator |
2026-02-03 03:45:32.210485 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-02-03 03:45:32.210491 | orchestrator | Tuesday 03 February 2026 03:45:25 +0000 (0:00:00.559) 0:07:33.563 ******
2026-02-03 03:45:32.210497 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:45:32.210504 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:45:32.210510 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:45:32.210516 | orchestrator |
2026-02-03 03:45:32.210527 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-02-03 03:45:32.210533 | orchestrator | Tuesday 03 February 2026 03:45:27 +0000 (0:00:01.534) 0:07:35.098 ******
2026-02-03 03:45:32.210539 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:45:32.210545 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:45:32.210552 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:45:32.210558 | orchestrator |
2026-02-03 03:45:32.210564 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-02-03 03:45:32.210570 | orchestrator | Tuesday 03 February 2026 03:45:28 +0000 (0:00:01.171) 0:07:36.270 ******
2026-02-03 03:45:32.210576 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:45:32.210583 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:45:32.210589 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:45:32.210595 | orchestrator |
2026-02-03 03:45:32.210621 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-02-03 03:45:32.210632 | orchestrator | Tuesday 03 February 2026 03:45:30 +0000 (0:00:01.789) 0:07:38.060 ******
2026-02-03 03:45:32.210638 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:45:32.210645 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:45:32.210651 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:45:32.210657 | orchestrator |
2026-02-03 03:45:32.210663 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-02-03 03:45:32.210669 | orchestrator | Tuesday 03 February 2026 03:45:30 +0000 (0:00:00.355) 0:07:38.416 ******
2026-02-03 03:45:32.210675 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:45:32.210682 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:45:32.210688 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:45:32.210694 | orchestrator |
2026-02-03 03:45:32.210700 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-02-03 03:45:32.210710 | orchestrator | Tuesday 03 February 2026 03:45:31 +0000 (0:00:00.682) 0:07:39.099 ******
2026-02-03 03:45:32.210716 | orchestrator | ok: [testbed-node-3] => (item=3)
2026-02-03 03:45:32.210722 | orchestrator | ok: [testbed-node-4] => (item=1)
2026-02-03 03:45:32.210729 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-02-03 03:45:32.210735 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-03 03:45:32.210741 | orchestrator | ok: [testbed-node-4] => (item=4)
2026-02-03 03:45:32.210747 | orchestrator | ok: [testbed-node-5] => (item=5)
2026-02-03 03:45:32.210753 | orchestrator |
2026-02-03 03:45:32.210765 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-02-03 03:46:08.663847 | orchestrator | Tuesday 03 February 2026 03:45:32 +0000 (0:00:01.088) 0:07:40.187 ******
2026-02-03 03:46:08.663981 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-02-03 03:46:08.664006 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-02-03 03:46:08.664018 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-02-03 03:46:08.664029 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-02-03 03:46:08.664039 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-02-03 03:46:08.664049 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-02-03 03:46:08.664065 | orchestrator |
2026-02-03 03:46:08.664084 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-02-03 03:46:08.664101 | orchestrator | Tuesday 03 February 2026 03:45:34 +0000 (0:00:02.190) 0:07:42.378 ******
2026-02-03 03:46:08.664120 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-02-03 03:46:08.664137 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-02-03 03:46:08.664151 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-02-03 03:46:08.664161 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-02-03 03:46:08.664171 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-02-03 03:46:08.664181 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-02-03 03:46:08.664196 | orchestrator |
2026-02-03 03:46:08.664213 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-02-03 03:46:08.664228 | orchestrator | Tuesday 03 February 2026 03:45:37 +0000 (0:00:03.619) 0:07:45.997 ******
2026-02-03 03:46:08.664277 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:46:08.664295 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:46:08.664311 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-03 03:46:08.664326 | orchestrator |
2026-02-03 03:46:08.664343 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-02-03 03:46:08.664359 | orchestrator | Tuesday 03 February 2026 03:45:40 +0000 (0:00:02.835) 0:07:48.833 ******
2026-02-03 03:46:08.664378 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:46:08.664395 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:46:08.664481 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-02-03 03:46:08.664498 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-03 03:46:08.664510 | orchestrator |
2026-02-03 03:46:08.664522 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-02-03 03:46:08.664533 | orchestrator | Tuesday 03 February 2026 03:45:53 +0000 (0:00:12.543) 0:08:01.377 ******
2026-02-03 03:46:08.664544 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:46:08.664556 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:46:08.664568 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:46:08.664580 | orchestrator |
2026-02-03 03:46:08.664618 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-03 03:46:08.664636 | orchestrator | Tuesday 03 February 2026 03:45:54 +0000 (0:00:01.139) 0:08:02.516 ******
2026-02-03 03:46:08.664662 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:46:08.664693 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:46:08.664723 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:46:08.664739 | orchestrator |
2026-02-03 03:46:08.664755 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-03 03:46:08.664771 | orchestrator | Tuesday 03 February 2026 03:45:54 +0000 (0:00:00.381) 0:08:02.898 ******
2026-02-03 03:46:08.664788 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 03:46:08.664804 | orchestrator |
2026-02-03 03:46:08.664820 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-02-03 03:46:08.664836 | orchestrator | Tuesday 03 February 2026 03:45:55 +0000 (0:00:00.821) 0:08:03.719 ******
2026-02-03 03:46:08.664850 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-03 03:46:08.664864 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-03 03:46:08.664879 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-03 03:46:08.664894 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:46:08.664909 | orchestrator |
2026-02-03 03:46:08.664925 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-03 03:46:08.664941 | orchestrator | Tuesday 03 February 2026 03:45:56 +0000 (0:00:00.420) 0:08:04.140 ******
2026-02-03 03:46:08.664956 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:46:08.664972 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:46:08.664986 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:46:08.665001 | orchestrator |
2026-02-03 03:46:08.665017 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-03 03:46:08.665031 | orchestrator | Tuesday 03 February 2026 03:45:56 +0000 (0:00:00.367) 0:08:04.508 ******
2026-02-03 03:46:08.665048 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:46:08.665064 | orchestrator |
2026-02-03 03:46:08.665081 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-03 03:46:08.665097 | orchestrator | Tuesday 03 February 2026 03:45:56 +0000 (0:00:00.239) 0:08:04.747 ******
2026-02-03 03:46:08.665113 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:46:08.665130 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:46:08.665141 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:46:08.665151 | orchestrator |
2026-02-03 03:46:08.665161 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-03 03:46:08.665203 | orchestrator | Tuesday 03 February 2026 03:45:57 +0000 (0:00:00.382) 0:08:05.129 ******
2026-02-03 03:46:08.665214 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:46:08.665224 | orchestrator |
2026-02-03 03:46:08.665233 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-03 03:46:08.665243 | orchestrator | Tuesday 03 February 2026 03:45:57 +0000 (0:00:00.226) 0:08:05.355 ******
2026-02-03 03:46:08.665253 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:46:08.665262 | orchestrator |
2026-02-03 03:46:08.665272 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-03 03:46:08.665305 | orchestrator | Tuesday 03 February 2026 03:45:57 +0000 (0:00:00.237) 0:08:05.593 ******
2026-02-03 03:46:08.665315 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:46:08.665326 | orchestrator |
2026-02-03 03:46:08.665342 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-03 03:46:08.665358 | orchestrator | Tuesday 03 February 2026 03:45:57 +0000 (0:00:00.137) 0:08:05.730 ******
2026-02-03 03:46:08.665374 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:46:08.665389 | orchestrator |
2026-02-03 03:46:08.665406 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-03 03:46:08.665422 | orchestrator | Tuesday 03 February 2026 03:45:58 +0000 (0:00:00.864) 0:08:06.595 ******
2026-02-03 03:46:08.665439 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:46:08.665453 | orchestrator |
2026-02-03 03:46:08.665463 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-03 03:46:08.665473 | orchestrator | Tuesday 03 February 2026 03:45:58 +0000 (0:00:00.233) 0:08:06.828 ******
2026-02-03 03:46:08.665483 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-03 03:46:08.665493 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-03 03:46:08.665502 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-03 03:46:08.665512 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:46:08.665522 | orchestrator |
2026-02-03 03:46:08.665531 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-03 03:46:08.665541 | orchestrator | Tuesday 03 February 2026 03:45:59 +0000 (0:00:00.422) 0:08:07.251 ******
2026-02-03 03:46:08.665551 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:46:08.665560 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:46:08.665570 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:46:08.665580 | orchestrator |
2026-02-03 03:46:08.665589 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-03 03:46:08.665666 | orchestrator | Tuesday 03 February 2026 03:45:59 +0000 (0:00:00.383) 0:08:07.634 ******
2026-02-03 03:46:08.665677 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:46:08.665686 | orchestrator |
2026-02-03 03:46:08.665696 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-03 03:46:08.665708 | orchestrator | Tuesday 03 February 2026 03:45:59 +0000 (0:00:00.243) 0:08:07.878 ******
2026-02-03 03:46:08.665725 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:46:08.665741 | orchestrator |
2026-02-03 03:46:08.665757 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-02-03 03:46:08.665775 | orchestrator |
2026-02-03 03:46:08.665792 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-03 03:46:08.665810 | orchestrator | Tuesday 03 February 2026 03:46:00 +0000 (0:00:01.039) 0:08:08.917 ******
2026-02-03 03:46:08.665828 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 03:46:08.665846 | orchestrator |
2026-02-03 03:46:08.665857 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-03 03:46:08.665867 | orchestrator | Tuesday 03 February 2026 03:46:02 +0000 (0:00:01.279) 0:08:10.196 ******
2026-02-03 03:46:08.665877 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 03:46:08.665897 | orchestrator |
2026-02-03 03:46:08.665907 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-03 03:46:08.665916 | orchestrator | Tuesday 03 February 2026 03:46:03 +0000 (0:00:01.391) 0:08:11.587 ******
2026-02-03 03:46:08.665926 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:46:08.665936 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:46:08.665945 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:46:08.665955 | orchestrator | ok: [testbed-node-0]
2026-02-03 03:46:08.665966 | orchestrator | ok: [testbed-node-1]
2026-02-03 03:46:08.665975 | orchestrator | ok: [testbed-node-2]
2026-02-03 03:46:08.665985 | orchestrator |
2026-02-03 03:46:08.665995 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-03 03:46:08.666004 | orchestrator | Tuesday 03 February 2026 03:46:04 +0000 (0:00:01.146) 0:08:12.734 ******
2026-02-03 03:46:08.666159 | orchestrator | skipping: [testbed-node-0]
2026-02-03 03:46:08.666191 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:46:08.666207 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:46:08.666224 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:46:08.666234 | orchestrator | skipping: [testbed-node-1]
2026-02-03 03:46:08.666244 | orchestrator | skipping: [testbed-node-2]
2026-02-03 03:46:08.666253 | orchestrator |
2026-02-03 03:46:08.666263 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-03 03:46:08.666273 | orchestrator | Tuesday 03
February 2026 03:46:05 +0000 (0:00:01.012) 0:08:13.746 ****** 2026-02-03 03:46:08.666283 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:46:08.666293 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:46:08.666302 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:46:08.666312 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:46:08.666321 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:46:08.666331 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:46:08.666341 | orchestrator | 2026-02-03 03:46:08.666350 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-03 03:46:08.666360 | orchestrator | Tuesday 03 February 2026 03:46:06 +0000 (0:00:00.802) 0:08:14.548 ****** 2026-02-03 03:46:08.666370 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:46:08.666379 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:46:08.666389 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:46:08.666407 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:46:08.666417 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:46:08.666427 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:46:08.666436 | orchestrator | 2026-02-03 03:46:08.666449 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-03 03:46:08.666466 | orchestrator | Tuesday 03 February 2026 03:46:07 +0000 (0:00:01.015) 0:08:15.563 ****** 2026-02-03 03:46:08.666482 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:46:08.666497 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:46:08.666513 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:46:08.666543 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:46:39.691407 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:46:39.691527 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:46:39.691544 | orchestrator | 2026-02-03 03:46:39.691558 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2026-02-03 03:46:39.691571 | orchestrator | Tuesday 03 February 2026 03:46:08 +0000 (0:00:01.093) 0:08:16.657 ****** 2026-02-03 03:46:39.691636 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:46:39.691651 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:46:39.691662 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:46:39.691673 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:46:39.691684 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:46:39.691696 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:46:39.691710 | orchestrator | 2026-02-03 03:46:39.691728 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-03 03:46:39.691767 | orchestrator | Tuesday 03 February 2026 03:46:09 +0000 (0:00:00.900) 0:08:17.557 ****** 2026-02-03 03:46:39.691779 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:46:39.691790 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:46:39.691801 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:46:39.691812 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:46:39.691823 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:46:39.691834 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:46:39.691845 | orchestrator | 2026-02-03 03:46:39.691856 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-03 03:46:39.691868 | orchestrator | Tuesday 03 February 2026 03:46:10 +0000 (0:00:00.684) 0:08:18.241 ****** 2026-02-03 03:46:39.691879 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:46:39.691890 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:46:39.691901 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:46:39.691912 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:46:39.691923 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:46:39.691933 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:46:39.691944 | 
orchestrator | 2026-02-03 03:46:39.691955 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-03 03:46:39.691966 | orchestrator | Tuesday 03 February 2026 03:46:11 +0000 (0:00:01.430) 0:08:19.671 ****** 2026-02-03 03:46:39.691977 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:46:39.691988 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:46:39.691998 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:46:39.692009 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:46:39.692020 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:46:39.692031 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:46:39.692041 | orchestrator | 2026-02-03 03:46:39.692052 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-03 03:46:39.692063 | orchestrator | Tuesday 03 February 2026 03:46:12 +0000 (0:00:01.086) 0:08:20.758 ****** 2026-02-03 03:46:39.692075 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:46:39.692086 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:46:39.692097 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:46:39.692117 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:46:39.692163 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:46:39.692199 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:46:39.692217 | orchestrator | 2026-02-03 03:46:39.692234 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-03 03:46:39.692250 | orchestrator | Tuesday 03 February 2026 03:46:13 +0000 (0:00:00.887) 0:08:21.645 ****** 2026-02-03 03:46:39.692267 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:46:39.692283 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:46:39.692300 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:46:39.692319 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:46:39.692338 | orchestrator | ok: [testbed-node-1] 2026-02-03 
03:46:39.692356 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:46:39.692374 | orchestrator | 2026-02-03 03:46:39.692391 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-03 03:46:39.692403 | orchestrator | Tuesday 03 February 2026 03:46:14 +0000 (0:00:00.645) 0:08:22.291 ****** 2026-02-03 03:46:39.692414 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:46:39.692425 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:46:39.692436 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:46:39.692446 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:46:39.692457 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:46:39.692468 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:46:39.692478 | orchestrator | 2026-02-03 03:46:39.692489 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-03 03:46:39.692500 | orchestrator | Tuesday 03 February 2026 03:46:15 +0000 (0:00:00.965) 0:08:23.257 ****** 2026-02-03 03:46:39.692511 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:46:39.692522 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:46:39.692532 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:46:39.692555 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:46:39.692566 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:46:39.692577 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:46:39.692657 | orchestrator | 2026-02-03 03:46:39.692669 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-03 03:46:39.692680 | orchestrator | Tuesday 03 February 2026 03:46:15 +0000 (0:00:00.649) 0:08:23.906 ****** 2026-02-03 03:46:39.692691 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:46:39.692702 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:46:39.692713 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:46:39.692724 | orchestrator | skipping: [testbed-node-0] 
2026-02-03 03:46:39.692743 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:46:39.692762 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:46:39.692780 | orchestrator | 2026-02-03 03:46:39.692800 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-03 03:46:39.692820 | orchestrator | Tuesday 03 February 2026 03:46:16 +0000 (0:00:00.935) 0:08:24.842 ****** 2026-02-03 03:46:39.692840 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:46:39.692852 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:46:39.692864 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:46:39.692874 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:46:39.692885 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:46:39.692896 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:46:39.692907 | orchestrator | 2026-02-03 03:46:39.692918 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-03 03:46:39.692929 | orchestrator | Tuesday 03 February 2026 03:46:17 +0000 (0:00:00.658) 0:08:25.500 ****** 2026-02-03 03:46:39.692940 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:46:39.692953 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:46:39.692997 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:46:39.693016 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:46:39.693033 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:46:39.693048 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:46:39.693064 | orchestrator | 2026-02-03 03:46:39.693081 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-03 03:46:39.693099 | orchestrator | Tuesday 03 February 2026 03:46:18 +0000 (0:00:00.928) 0:08:26.428 ****** 2026-02-03 03:46:39.693117 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:46:39.693137 | orchestrator | skipping: [testbed-node-4] 
2026-02-03 03:46:39.693155 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:46:39.693174 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:46:39.693185 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:46:39.693196 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:46:39.693207 | orchestrator | 2026-02-03 03:46:39.693218 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-03 03:46:39.693230 | orchestrator | Tuesday 03 February 2026 03:46:19 +0000 (0:00:00.670) 0:08:27.099 ****** 2026-02-03 03:46:39.693241 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:46:39.693251 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:46:39.693262 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:46:39.693273 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:46:39.693284 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:46:39.693294 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:46:39.693305 | orchestrator | 2026-02-03 03:46:39.693317 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-03 03:46:39.693420 | orchestrator | Tuesday 03 February 2026 03:46:20 +0000 (0:00:00.982) 0:08:28.081 ****** 2026-02-03 03:46:39.693441 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:46:39.693452 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:46:39.693463 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:46:39.693474 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:46:39.693485 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:46:39.693495 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:46:39.693506 | orchestrator | 2026-02-03 03:46:39.693517 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-02-03 03:46:39.693538 | orchestrator | Tuesday 03 February 2026 03:46:21 +0000 (0:00:01.454) 0:08:29.536 ****** 2026-02-03 03:46:39.693549 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-02-03 03:46:39.693560 | orchestrator | 2026-02-03 03:46:39.693571 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-02-03 03:46:39.693607 | orchestrator | Tuesday 03 February 2026 03:46:25 +0000 (0:00:04.059) 0:08:33.595 ****** 2026-02-03 03:46:39.693620 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-03 03:46:39.693632 | orchestrator | 2026-02-03 03:46:39.693643 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-02-03 03:46:39.693654 | orchestrator | Tuesday 03 February 2026 03:46:27 +0000 (0:00:02.109) 0:08:35.705 ****** 2026-02-03 03:46:39.693665 | orchestrator | changed: [testbed-node-3] 2026-02-03 03:46:39.693675 | orchestrator | changed: [testbed-node-4] 2026-02-03 03:46:39.693686 | orchestrator | changed: [testbed-node-5] 2026-02-03 03:46:39.693697 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:46:39.693707 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:46:39.693718 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:46:39.693729 | orchestrator | 2026-02-03 03:46:39.693740 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-02-03 03:46:39.693751 | orchestrator | Tuesday 03 February 2026 03:46:29 +0000 (0:00:01.820) 0:08:37.526 ****** 2026-02-03 03:46:39.693761 | orchestrator | changed: [testbed-node-3] 2026-02-03 03:46:39.693772 | orchestrator | changed: [testbed-node-4] 2026-02-03 03:46:39.693783 | orchestrator | changed: [testbed-node-5] 2026-02-03 03:46:39.693793 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:46:39.693804 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:46:39.693814 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:46:39.693825 | orchestrator | 2026-02-03 03:46:39.693836 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-02-03 03:46:39.693847 | orchestrator | Tuesday 03 February 2026 03:46:30 +0000 (0:00:01.066) 0:08:38.592 ****** 2026-02-03 03:46:39.693858 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:46:39.693871 | orchestrator | 2026-02-03 03:46:39.693882 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-02-03 03:46:39.693892 | orchestrator | Tuesday 03 February 2026 03:46:32 +0000 (0:00:01.483) 0:08:40.076 ****** 2026-02-03 03:46:39.693903 | orchestrator | changed: [testbed-node-3] 2026-02-03 03:46:39.693914 | orchestrator | changed: [testbed-node-4] 2026-02-03 03:46:39.693925 | orchestrator | changed: [testbed-node-5] 2026-02-03 03:46:39.693935 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:46:39.693946 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:46:39.693956 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:46:39.693967 | orchestrator | 2026-02-03 03:46:39.693978 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-02-03 03:46:39.693989 | orchestrator | Tuesday 03 February 2026 03:46:33 +0000 (0:00:01.843) 0:08:41.919 ****** 2026-02-03 03:46:39.694000 | orchestrator | changed: [testbed-node-3] 2026-02-03 03:46:39.694010 | orchestrator | changed: [testbed-node-5] 2026-02-03 03:46:39.694097 | orchestrator | changed: [testbed-node-4] 2026-02-03 03:46:39.694109 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:46:39.694120 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:46:39.694130 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:46:39.694141 | orchestrator | 2026-02-03 03:46:39.694152 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-02-03 03:46:39.694169 | orchestrator | Tuesday 03 February 2026 03:46:37 +0000 (0:00:03.539) 
0:08:45.459 ****** 2026-02-03 03:46:39.694181 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:46:39.694200 | orchestrator | 2026-02-03 03:46:39.694211 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-02-03 03:46:39.694222 | orchestrator | Tuesday 03 February 2026 03:46:38 +0000 (0:00:01.531) 0:08:46.990 ****** 2026-02-03 03:46:39.694233 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:46:39.694243 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:46:39.694268 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:47:06.910524 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:47:06.910630 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:47:06.910639 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:47:06.910646 | orchestrator | 2026-02-03 03:47:06.910652 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-02-03 03:47:06.910659 | orchestrator | Tuesday 03 February 2026 03:46:39 +0000 (0:00:00.974) 0:08:47.965 ****** 2026-02-03 03:47:06.910665 | orchestrator | changed: [testbed-node-3] 2026-02-03 03:47:06.910671 | orchestrator | changed: [testbed-node-4] 2026-02-03 03:47:06.910676 | orchestrator | changed: [testbed-node-5] 2026-02-03 03:47:06.910681 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:47:06.910686 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:47:06.910739 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:47:06.910745 | orchestrator | 2026-02-03 03:47:06.910750 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-02-03 03:47:06.910755 | orchestrator | Tuesday 03 February 2026 03:46:42 +0000 (0:00:02.309) 0:08:50.275 ****** 2026-02-03 03:47:06.910760 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:47:06.910765 | orchestrator 
| ok: [testbed-node-4] 2026-02-03 03:47:06.910770 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:47:06.910775 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:47:06.910780 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:47:06.910785 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:47:06.910790 | orchestrator | 2026-02-03 03:47:06.910795 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-02-03 03:47:06.910801 | orchestrator | 2026-02-03 03:47:06.910807 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-03 03:47:06.910812 | orchestrator | Tuesday 03 February 2026 03:46:43 +0000 (0:00:01.249) 0:08:51.524 ****** 2026-02-03 03:47:06.910817 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-03 03:47:06.910824 | orchestrator | 2026-02-03 03:47:06.910829 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-03 03:47:06.910834 | orchestrator | Tuesday 03 February 2026 03:46:44 +0000 (0:00:00.817) 0:08:52.341 ****** 2026-02-03 03:47:06.910839 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-03 03:47:06.910844 | orchestrator | 2026-02-03 03:47:06.910849 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-03 03:47:06.910854 | orchestrator | Tuesday 03 February 2026 03:46:44 +0000 (0:00:00.599) 0:08:52.940 ****** 2026-02-03 03:47:06.910859 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:47:06.910864 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:47:06.910869 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:47:06.910874 | orchestrator | 2026-02-03 03:47:06.910879 | orchestrator | TASK [ceph-handler : Check for an osd container] 
******************************* 2026-02-03 03:47:06.910884 | orchestrator | Tuesday 03 February 2026 03:46:45 +0000 (0:00:00.384) 0:08:53.325 ****** 2026-02-03 03:47:06.910889 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:47:06.910894 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:47:06.910899 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:47:06.910904 | orchestrator | 2026-02-03 03:47:06.910909 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-03 03:47:06.910914 | orchestrator | Tuesday 03 February 2026 03:46:46 +0000 (0:00:01.026) 0:08:54.351 ****** 2026-02-03 03:47:06.910919 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:47:06.910924 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:47:06.910946 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:47:06.910951 | orchestrator | 2026-02-03 03:47:06.910956 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-03 03:47:06.910961 | orchestrator | Tuesday 03 February 2026 03:46:47 +0000 (0:00:00.758) 0:08:55.110 ****** 2026-02-03 03:47:06.910966 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:47:06.910971 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:47:06.910976 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:47:06.910981 | orchestrator | 2026-02-03 03:47:06.910986 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-03 03:47:06.910991 | orchestrator | Tuesday 03 February 2026 03:46:47 +0000 (0:00:00.817) 0:08:55.928 ****** 2026-02-03 03:47:06.910996 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:47:06.911001 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:47:06.911006 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:47:06.911011 | orchestrator | 2026-02-03 03:47:06.911016 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-03 
03:47:06.911021 | orchestrator | Tuesday 03 February 2026 03:46:48 +0000 (0:00:00.352) 0:08:56.280 ****** 2026-02-03 03:47:06.911026 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:47:06.911031 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:47:06.911036 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:47:06.911041 | orchestrator | 2026-02-03 03:47:06.911046 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-03 03:47:06.911051 | orchestrator | Tuesday 03 February 2026 03:46:48 +0000 (0:00:00.600) 0:08:56.880 ****** 2026-02-03 03:47:06.911056 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:47:06.911061 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:47:06.911065 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:47:06.911071 | orchestrator | 2026-02-03 03:47:06.911077 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-03 03:47:06.911083 | orchestrator | Tuesday 03 February 2026 03:46:49 +0000 (0:00:00.333) 0:08:57.214 ****** 2026-02-03 03:47:06.911090 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:47:06.911095 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:47:06.911101 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:47:06.911107 | orchestrator | 2026-02-03 03:47:06.911124 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-03 03:47:06.911130 | orchestrator | Tuesday 03 February 2026 03:46:49 +0000 (0:00:00.759) 0:08:57.973 ****** 2026-02-03 03:47:06.911136 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:47:06.911142 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:47:06.911147 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:47:06.911153 | orchestrator | 2026-02-03 03:47:06.911160 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-03 03:47:06.911166 | orchestrator | 
Tuesday 03 February 2026 03:46:50 +0000 (0:00:00.737) 0:08:58.711 ****** 2026-02-03 03:47:06.911187 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:47:06.911193 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:47:06.911199 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:47:06.911204 | orchestrator | 2026-02-03 03:47:06.911210 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-03 03:47:06.911216 | orchestrator | Tuesday 03 February 2026 03:46:51 +0000 (0:00:00.605) 0:08:59.316 ****** 2026-02-03 03:47:06.911222 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:47:06.911228 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:47:06.911233 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:47:06.911239 | orchestrator | 2026-02-03 03:47:06.911246 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-03 03:47:06.911251 | orchestrator | Tuesday 03 February 2026 03:46:51 +0000 (0:00:00.331) 0:08:59.648 ****** 2026-02-03 03:47:06.911257 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:47:06.911263 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:47:06.911268 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:47:06.911274 | orchestrator | 2026-02-03 03:47:06.911280 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-03 03:47:06.911291 | orchestrator | Tuesday 03 February 2026 03:46:52 +0000 (0:00:00.358) 0:09:00.007 ****** 2026-02-03 03:47:06.911297 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:47:06.911302 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:47:06.911308 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:47:06.911314 | orchestrator | 2026-02-03 03:47:06.911320 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-03 03:47:06.911326 | orchestrator | Tuesday 03 February 2026 03:46:52 +0000 
(0:00:00.363) 0:09:00.370 ******
2026-02-03 03:47:06.911332 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:47:06.911337 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:47:06.911343 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:47:06.911349 | orchestrator |
2026-02-03 03:47:06.911355 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-03 03:47:06.911361 | orchestrator | Tuesday 03 February 2026 03:46:53 +0000 (0:00:00.647) 0:09:01.018 ******
2026-02-03 03:47:06.911367 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:47:06.911373 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:47:06.911379 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:47:06.911384 | orchestrator |
2026-02-03 03:47:06.911390 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-03 03:47:06.911396 | orchestrator | Tuesday 03 February 2026 03:46:53 +0000 (0:00:00.385) 0:09:01.403 ******
2026-02-03 03:47:06.911402 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:47:06.911408 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:47:06.911414 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:47:06.911448 | orchestrator |
2026-02-03 03:47:06.911456 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-03 03:47:06.911464 | orchestrator | Tuesday 03 February 2026 03:46:53 +0000 (0:00:00.336) 0:09:01.740 ******
2026-02-03 03:47:06.911472 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:47:06.911480 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:47:06.911487 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:47:06.911495 | orchestrator |
2026-02-03 03:47:06.911502 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-03 03:47:06.911510 | orchestrator | Tuesday 03 February 2026 03:46:54 +0000 (0:00:00.402) 0:09:02.142 ******
2026-02-03 03:47:06.911517 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:47:06.911524 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:47:06.911531 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:47:06.911539 | orchestrator |
2026-02-03 03:47:06.911546 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-03 03:47:06.911554 | orchestrator | Tuesday 03 February 2026 03:46:54 +0000 (0:00:00.653) 0:09:02.795 ******
2026-02-03 03:47:06.911561 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:47:06.911569 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:47:06.911598 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:47:06.911607 | orchestrator |
2026-02-03 03:47:06.911615 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-02-03 03:47:06.911623 | orchestrator | Tuesday 03 February 2026 03:46:55 +0000 (0:00:00.596) 0:09:03.392 ******
2026-02-03 03:47:06.911631 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:47:06.911640 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:47:06.911649 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-02-03 03:47:06.911658 | orchestrator |
2026-02-03 03:47:06.911666 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-02-03 03:47:06.911671 | orchestrator | Tuesday 03 February 2026 03:46:55 +0000 (0:00:00.479) 0:09:03.871 ******
2026-02-03 03:47:06.911676 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-03 03:47:06.911681 | orchestrator |
2026-02-03 03:47:06.911686 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-02-03 03:47:06.911690 | orchestrator | Tuesday 03 February 2026 03:46:58 +0000 (0:00:02.778) 0:09:06.650 ******
2026-02-03 03:47:06.911704 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-02-03 03:47:06.911711 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:47:06.911716 | orchestrator |
2026-02-03 03:47:06.911723 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-02-03 03:47:06.911736 | orchestrator | Tuesday 03 February 2026 03:46:58 +0000 (0:00:00.237) 0:09:06.888 ******
2026-02-03 03:47:06.911747 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-03 03:47:06.911770 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-03 03:47:37.814800 | orchestrator |
2026-02-03 03:47:37.814919 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-02-03 03:47:37.814938 | orchestrator | Tuesday 03 February 2026 03:47:06 +0000 (0:00:08.008) 0:09:14.897 ******
2026-02-03 03:47:37.814959 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-03 03:47:37.814979 | orchestrator |
2026-02-03 03:47:37.814999 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-02-03 03:47:37.815018 | orchestrator | Tuesday 03 February 2026 03:47:10 +0000 (0:00:03.677) 0:09:18.574 ******
2026-02-03 03:47:37.815037 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 03:47:37.815057 | orchestrator |
2026-02-03 03:47:37.815078 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-02-03 03:47:37.815098 | orchestrator | Tuesday 03 February 2026 03:47:11 +0000 (0:00:00.582) 0:09:19.157 ******
2026-02-03 03:47:37.815117 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-03 03:47:37.815136 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-02-03 03:47:37.815215 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-03 03:47:37.815236 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-03 03:47:37.815252 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-02-03 03:47:37.815270 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-02-03 03:47:37.815288 | orchestrator |
2026-02-03 03:47:37.815306 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-02-03 03:47:37.815325 | orchestrator | Tuesday 03 February 2026 03:47:12 +0000 (0:00:01.416) 0:09:20.574 ******
2026-02-03 03:47:37.815344 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-03 03:47:37.815366 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-03 03:47:37.815387 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-03 03:47:37.815405 | orchestrator |
2026-02-03 03:47:37.815423 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-02-03 03:47:37.815442 | orchestrator | Tuesday 03 February 2026 03:47:14 +0000 (0:00:02.211) 0:09:22.785 ******
2026-02-03 03:47:37.815463 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-03 03:47:37.815483 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-03 03:47:37.815500 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:47:37.815516 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-03 03:47:37.815534 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-03 03:47:37.815553 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:47:37.815637 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-03 03:47:37.815659 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-03 03:47:37.815676 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:47:37.815694 | orchestrator |
2026-02-03 03:47:37.815713 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-02-03 03:47:37.815731 | orchestrator | Tuesday 03 February 2026 03:47:16 +0000 (0:00:01.260) 0:09:24.045 ******
2026-02-03 03:47:37.815749 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:47:37.815767 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:47:37.815786 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:47:37.815803 | orchestrator |
2026-02-03 03:47:37.815821 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-02-03 03:47:37.815839 | orchestrator | Tuesday 03 February 2026 03:47:18 +0000 (0:00:02.597) 0:09:26.643 ******
2026-02-03 03:47:37.815857 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:47:37.815875 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:47:37.815893 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:47:37.815910 | orchestrator |
2026-02-03 03:47:37.815928 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-02-03 03:47:37.815962 | orchestrator | Tuesday 03 February 2026 03:47:19 +0000 (0:00:00.636) 0:09:27.280 ******
2026-02-03 03:47:37.815981 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 03:47:37.815999 | orchestrator |
2026-02-03 03:47:37.816016 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-02-03 03:47:37.816035 | orchestrator | Tuesday 03 February 2026 03:47:19 +0000 (0:00:00.609) 0:09:27.889 ******
2026-02-03 03:47:37.816053 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 03:47:37.816072 | orchestrator |
2026-02-03 03:47:37.816090 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-02-03 03:47:37.816107 | orchestrator | Tuesday 03 February 2026 03:47:20 +0000 (0:00:00.826) 0:09:28.716 ******
2026-02-03 03:47:37.816126 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:47:37.816144 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:47:37.816161 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:47:37.816176 | orchestrator |
2026-02-03 03:47:37.816214 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-02-03 03:47:37.816234 | orchestrator | Tuesday 03 February 2026 03:47:22 +0000 (0:00:01.330) 0:09:30.047 ******
2026-02-03 03:47:37.816253 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:47:37.816271 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:47:37.816291 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:47:37.816309 | orchestrator |
2026-02-03 03:47:37.816327 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-02-03 03:47:37.816345 | orchestrator | Tuesday 03 February 2026 03:47:23 +0000 (0:00:01.190) 0:09:31.237 ******
2026-02-03 03:47:37.816363 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:47:37.816380 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:47:37.816397 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:47:37.816417 | orchestrator |
2026-02-03 03:47:37.816467 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-02-03 03:47:37.816488 | orchestrator | Tuesday 03 February 2026 03:47:25 +0000 (0:00:01.788) 0:09:33.025 ******
2026-02-03 03:47:37.816506 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:47:37.816523 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:47:37.816542 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:47:37.816560 | orchestrator |
2026-02-03 03:47:37.816612 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-02-03 03:47:37.816629 | orchestrator | Tuesday 03 February 2026 03:47:27 +0000 (0:00:02.359) 0:09:35.385 ******
2026-02-03 03:47:37.816647 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:47:37.816665 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:47:37.816700 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:47:37.816717 | orchestrator |
2026-02-03 03:47:37.816734 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-03 03:47:37.816750 | orchestrator | Tuesday 03 February 2026 03:47:28 +0000 (0:00:01.293) 0:09:36.679 ******
2026-02-03 03:47:37.816767 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:47:37.816785 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:47:37.816802 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:47:37.816820 | orchestrator |
2026-02-03 03:47:37.816838 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-02-03 03:47:37.816855 | orchestrator | Tuesday 03 February 2026 03:47:29 +0000 (0:00:01.033) 0:09:37.713 ******
2026-02-03 03:47:37.816873 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 03:47:37.816893 | orchestrator |
2026-02-03 03:47:37.816910 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-02-03 03:47:37.816928 | orchestrator | Tuesday 03 February 2026 03:47:30 +0000 (0:00:00.601) 0:09:38.314 ******
2026-02-03 03:47:37.816947 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:47:37.816966 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:47:37.816984 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:47:37.817002 | orchestrator |
2026-02-03 03:47:37.817020 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-02-03 03:47:37.817039 | orchestrator | Tuesday 03 February 2026 03:47:30 +0000 (0:00:00.317) 0:09:38.632 ******
2026-02-03 03:47:37.817057 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:47:37.817075 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:47:37.817091 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:47:37.817109 | orchestrator |
2026-02-03 03:47:37.817127 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-02-03 03:47:37.817145 | orchestrator | Tuesday 03 February 2026 03:47:32 +0000 (0:00:01.554) 0:09:40.186 ******
2026-02-03 03:47:37.817163 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-03 03:47:37.817181 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-03 03:47:37.817201 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-03 03:47:37.817219 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:47:37.817237 | orchestrator |
2026-02-03 03:47:37.817253 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-02-03 03:47:37.817265 | orchestrator | Tuesday 03 February 2026 03:47:32 +0000 (0:00:00.723) 0:09:40.910 ******
2026-02-03 03:47:37.817275 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:47:37.817286 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:47:37.817297 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:47:37.817308 | orchestrator |
2026-02-03 03:47:37.817319 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-02-03 03:47:37.817330 | orchestrator |
2026-02-03 03:47:37.817341 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-03 03:47:37.817351 | orchestrator | Tuesday 03 February 2026 03:47:33 +0000 (0:00:00.648) 0:09:41.558 ******
2026-02-03 03:47:37.817363 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 03:47:37.817376 | orchestrator |
2026-02-03 03:47:37.817387 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-03 03:47:37.817398 | orchestrator | Tuesday 03 February 2026 03:47:34 +0000 (0:00:00.836) 0:09:42.395 ******
2026-02-03 03:47:37.817410 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 03:47:37.817421 | orchestrator |
2026-02-03 03:47:37.817432 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-03 03:47:37.817443 | orchestrator | Tuesday 03 February 2026 03:47:34 +0000 (0:00:00.580) 0:09:42.975 ******
2026-02-03 03:47:37.817453 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:47:37.817477 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:47:37.817488 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:47:37.817498 | orchestrator |
2026-02-03 03:47:37.817509 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-03 03:47:37.817520 | orchestrator | Tuesday 03 February 2026 03:47:35 +0000 (0:00:00.576) 0:09:43.552 ******
2026-02-03 03:47:37.817531 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:47:37.817542 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:47:37.817553 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:47:37.817563 | orchestrator |
2026-02-03 03:47:37.817655 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-03 03:47:37.817678 | orchestrator | Tuesday 03 February 2026 03:47:36 +0000 (0:00:00.748) 0:09:44.301 ******
2026-02-03 03:47:37.817689 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:47:37.817700 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:47:37.817711 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:47:37.817722 | orchestrator |
2026-02-03 03:47:37.817733 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-03 03:47:37.817743 | orchestrator | Tuesday 03 February 2026 03:47:37 +0000 (0:00:00.753) 0:09:45.054 ******
2026-02-03 03:47:37.817753 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:47:37.817763 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:47:37.817772 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:47:37.817782 | orchestrator |
2026-02-03 03:47:37.817792 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-03 03:47:37.817816 | orchestrator | Tuesday 03 February 2026 03:47:37 +0000 (0:00:00.749) 0:09:45.803 ******
2026-02-03 03:48:00.349772 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:48:00.349855 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:48:00.349861 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:48:00.349866 | orchestrator |
2026-02-03 03:48:00.349871 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-03 03:48:00.349877 | orchestrator | Tuesday 03 February 2026 03:47:38 +0000 (0:00:00.646) 0:09:46.449 ******
2026-02-03 03:48:00.349882 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:48:00.349886 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:48:00.349891 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:48:00.349894 | orchestrator |
2026-02-03 03:48:00.349898 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-03 03:48:00.349902 | orchestrator | Tuesday 03 February 2026 03:47:38 +0000 (0:00:00.372) 0:09:46.822 ******
2026-02-03 03:48:00.349906 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:48:00.349910 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:48:00.349914 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:48:00.349918 | orchestrator |
2026-02-03 03:48:00.349921 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-03 03:48:00.349925 | orchestrator | Tuesday 03 February 2026 03:47:39 +0000 (0:00:00.348) 0:09:47.170 ******
2026-02-03 03:48:00.349929 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:48:00.349934 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:48:00.349938 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:48:00.349942 | orchestrator |
2026-02-03 03:48:00.349946 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-03 03:48:00.349950 | orchestrator | Tuesday 03 February 2026 03:47:39 +0000 (0:00:00.758) 0:09:47.929 ******
2026-02-03 03:48:00.349954 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:48:00.349958 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:48:00.349962 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:48:00.349965 | orchestrator |
2026-02-03 03:48:00.349969 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-03 03:48:00.349973 | orchestrator | Tuesday 03 February 2026 03:47:41 +0000 (0:00:01.095) 0:09:49.024 ******
2026-02-03 03:48:00.349977 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:48:00.349981 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:48:00.349985 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:48:00.350007 | orchestrator |
2026-02-03 03:48:00.350012 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-03 03:48:00.350049 | orchestrator | Tuesday 03 February 2026 03:47:41 +0000 (0:00:00.331) 0:09:49.356 ******
2026-02-03 03:48:00.350054 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:48:00.350058 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:48:00.350062 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:48:00.350065 | orchestrator |
2026-02-03 03:48:00.350069 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-03 03:48:00.350073 | orchestrator | Tuesday 03 February 2026 03:47:41 +0000 (0:00:00.327) 0:09:49.684 ******
2026-02-03 03:48:00.350077 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:48:00.350081 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:48:00.350085 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:48:00.350089 | orchestrator |
2026-02-03 03:48:00.350093 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-03 03:48:00.350096 | orchestrator | Tuesday 03 February 2026 03:47:42 +0000 (0:00:00.365) 0:09:50.049 ******
2026-02-03 03:48:00.350100 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:48:00.350104 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:48:00.350108 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:48:00.350112 | orchestrator |
2026-02-03 03:48:00.350116 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-03 03:48:00.350120 | orchestrator | Tuesday 03 February 2026 03:47:42 +0000 (0:00:00.682) 0:09:50.732 ******
2026-02-03 03:48:00.350124 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:48:00.350127 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:48:00.350131 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:48:00.350135 | orchestrator |
2026-02-03 03:48:00.350139 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-03 03:48:00.350143 | orchestrator | Tuesday 03 February 2026 03:47:43 +0000 (0:00:00.356) 0:09:51.088 ******
2026-02-03 03:48:00.350147 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:48:00.350151 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:48:00.350154 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:48:00.350158 | orchestrator |
2026-02-03 03:48:00.350162 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-03 03:48:00.350166 | orchestrator | Tuesday 03 February 2026 03:47:43 +0000 (0:00:00.347) 0:09:51.436 ******
2026-02-03 03:48:00.350170 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:48:00.350174 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:48:00.350178 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:48:00.350182 | orchestrator |
2026-02-03 03:48:00.350185 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-03 03:48:00.350189 | orchestrator | Tuesday 03 February 2026 03:47:43 +0000 (0:00:00.369) 0:09:51.806 ******
2026-02-03 03:48:00.350193 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:48:00.350197 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:48:00.350201 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:48:00.350204 | orchestrator |
2026-02-03 03:48:00.350208 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-03 03:48:00.350212 | orchestrator | Tuesday 03 February 2026 03:47:44 +0000 (0:00:00.631) 0:09:52.437 ******
2026-02-03 03:48:00.350227 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:48:00.350231 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:48:00.350235 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:48:00.350239 | orchestrator |
2026-02-03 03:48:00.350243 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-03 03:48:00.350246 | orchestrator | Tuesday 03 February 2026 03:47:44 +0000 (0:00:00.363) 0:09:52.801 ******
2026-02-03 03:48:00.350250 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:48:00.350254 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:48:00.350258 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:48:00.350261 | orchestrator |
2026-02-03 03:48:00.350265 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-02-03 03:48:00.350274 | orchestrator | Tuesday 03 February 2026 03:47:45 +0000 (0:00:00.628) 0:09:53.430 ******
2026-02-03 03:48:00.350289 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 03:48:00.350294 | orchestrator |
2026-02-03 03:48:00.350298 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-03 03:48:00.350302 | orchestrator | Tuesday 03 February 2026 03:47:46 +0000 (0:00:00.920) 0:09:54.351 ******
2026-02-03 03:48:00.350306 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-03 03:48:00.350310 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-03 03:48:00.350314 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-03 03:48:00.350318 | orchestrator |
2026-02-03 03:48:00.350325 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-03 03:48:00.350331 | orchestrator | Tuesday 03 February 2026 03:47:48 +0000 (0:00:02.165) 0:09:56.516 ******
2026-02-03 03:48:00.350339 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-03 03:48:00.350348 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-03 03:48:00.350356 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:48:00.350362 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-03 03:48:00.350368 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-03 03:48:00.350374 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:48:00.350379 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-03 03:48:00.350385 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-03 03:48:00.350392 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:48:00.350398 | orchestrator |
2026-02-03 03:48:00.350403 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-02-03 03:48:00.350410 | orchestrator | Tuesday 03 February 2026 03:47:49 +0000 (0:00:01.241) 0:09:57.758 ******
2026-02-03 03:48:00.350416 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:48:00.350422 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:48:00.350429 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:48:00.350435 | orchestrator |
2026-02-03 03:48:00.350442 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-02-03 03:48:00.350449 | orchestrator | Tuesday 03 February 2026 03:47:50 +0000 (0:00:00.658) 0:09:58.416 ******
2026-02-03 03:48:00.350456 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 03:48:00.350463 | orchestrator |
2026-02-03 03:48:00.350467 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-02-03 03:48:00.350487 | orchestrator | Tuesday 03 February 2026 03:47:50 +0000 (0:00:00.580) 0:09:58.996 ******
2026-02-03 03:48:00.350498 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-03 03:48:00.350505 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-03 03:48:00.350516 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-03 03:48:00.350521 | orchestrator |
2026-02-03 03:48:00.350531 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-02-03 03:48:00.350536 | orchestrator | Tuesday 03 February 2026 03:47:51 +0000 (0:00:00.834) 0:09:59.830 ******
2026-02-03 03:48:00.350541 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-03 03:48:00.350545 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-02-03 03:48:00.350550 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-03 03:48:00.350560 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-02-03 03:48:00.350584 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-03 03:48:00.350588 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-02-03 03:48:00.350593 | orchestrator |
2026-02-03 03:48:00.350597 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-03 03:48:00.350602 | orchestrator | Tuesday 03 February 2026 03:47:56 +0000 (0:00:04.927) 0:10:04.758 ******
2026-02-03 03:48:00.350606 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-03 03:48:00.350611 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-03 03:48:00.350616 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-03 03:48:00.350624 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-03 03:48:00.350629 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-03 03:48:00.350633 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-03 03:48:00.350638 | orchestrator |
2026-02-03 03:48:00.350643 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-03 03:48:00.350647 | orchestrator | Tuesday 03 February 2026 03:47:59 +0000 (0:00:02.287) 0:10:07.045 ******
2026-02-03 03:48:00.350652 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-03 03:48:00.350656 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:48:00.350661 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-03 03:48:00.350666 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:48:00.350671 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-03 03:48:00.350681 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:48:47.366647 | orchestrator |
2026-02-03 03:48:47.366763 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-02-03 03:48:47.366783 | orchestrator | Tuesday 03 February 2026 03:48:00 +0000 (0:00:01.291) 0:10:08.337 ******
2026-02-03 03:48:47.366794 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2026-02-03 03:48:47.366805 | orchestrator |
2026-02-03 03:48:47.366817 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-02-03 03:48:47.366828 | orchestrator | Tuesday 03 February 2026 03:48:00 +0000 (0:00:00.249) 0:10:08.586 ******
2026-02-03 03:48:47.366840 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-03 03:48:47.366855 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-03 03:48:47.366870 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-03 03:48:47.366882 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-03 03:48:47.366892 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-03 03:48:47.366903 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:48:47.366915 | orchestrator |
2026-02-03 03:48:47.366926 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-02-03 03:48:47.366937 | orchestrator | Tuesday 03 February 2026 03:48:01 +0000 (0:00:00.948) 0:10:09.535 ******
2026-02-03 03:48:47.366948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-03 03:48:47.366958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-03 03:48:47.366993 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-03 03:48:47.367003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-03 03:48:47.367014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-03 03:48:47.367024 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:48:47.367035 | orchestrator |
2026-02-03 03:48:47.367046 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-02-03 03:48:47.367057 | orchestrator | Tuesday 03 February 2026 03:48:02 +0000 (0:00:00.947) 0:10:10.482 ******
2026-02-03 03:48:47.367068 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-03 03:48:47.367081 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-03 03:48:47.367092 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-03 03:48:47.367103 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-03 03:48:47.367115 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-03 03:48:47.367126 | orchestrator |
2026-02-03 03:48:47.367138 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-02-03 03:48:47.367149 | orchestrator | Tuesday 03 February 2026 03:48:34 +0000 (0:00:31.548) 0:10:42.031 ******
2026-02-03 03:48:47.367160 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:48:47.367172 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:48:47.367184 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:48:47.367195 | orchestrator |
2026-02-03 03:48:47.367207 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-02-03 03:48:47.367217 | orchestrator | Tuesday 03 February 2026 03:48:34 +0000 (0:00:00.663) 0:10:42.694 ******
2026-02-03 03:48:47.367241 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:48:47.367253 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:48:47.367264 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:48:47.367274 | orchestrator |
2026-02-03 03:48:47.367284 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-02-03 03:48:47.367295 | orchestrator | Tuesday 03 February 2026 03:48:35 +0000 (0:00:00.349) 0:10:43.043 ******
2026-02-03 03:48:47.367306 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 03:48:47.367317 | orchestrator |
2026-02-03 03:48:47.367327 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-02-03 03:48:47.367336 | orchestrator | Tuesday 03 February 2026 03:48:35 +0000 (0:00:00.858) 0:10:43.902 ******
2026-02-03 03:48:47.367360 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 03:48:47.367367 | orchestrator |
2026-02-03 03:48:47.367374 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-02-03 03:48:47.367380 | orchestrator | Tuesday 03 February 2026 03:48:36 +0000 (0:00:00.609) 0:10:44.511 ******
2026-02-03 03:48:47.367387 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:48:47.367393 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:48:47.367399 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:48:47.367405 | orchestrator |
2026-02-03 03:48:47.367412 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-02-03 03:48:47.367426 | orchestrator | Tuesday 03 February 2026 03:48:37 +0000 (0:00:01.330) 0:10:45.842 ******
2026-02-03 03:48:47.367432 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:48:47.367438 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:48:47.367444 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:48:47.367450 | orchestrator |
2026-02-03 03:48:47.367457 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-02-03 03:48:47.367463 | orchestrator | Tuesday 03 February 2026 03:48:39 +0000 (0:00:01.458) 0:10:47.300 ******
2026-02-03 03:48:47.367469 | orchestrator | changed: [testbed-node-3]
2026-02-03 03:48:47.367475 | orchestrator | changed: [testbed-node-4]
2026-02-03 03:48:47.367481 | orchestrator | changed: [testbed-node-5]
2026-02-03 03:48:47.367490 | orchestrator |
2026-02-03 03:48:47.367500 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-02-03 03:48:47.367510 | orchestrator | Tuesday 03 February 2026 03:48:41 +0000 (0:00:01.911) 0:10:49.211 ******
2026-02-03 03:48:47.367521 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-03 03:48:47.367532 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-03 03:48:47.367542 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-03 03:48:47.367575 | orchestrator |
2026-02-03 03:48:47.367582 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-03 03:48:47.367589 | orchestrator | Tuesday 03 February 2026 03:48:44 +0000 (0:00:02.970) 0:10:52.181 ******
2026-02-03 03:48:47.367595 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:48:47.367601 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:48:47.367607 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:48:47.367613 | orchestrator
| 2026-02-03 03:48:47.367619 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-03 03:48:47.367626 | orchestrator | Tuesday 03 February 2026 03:48:44 +0000 (0:00:00.436) 0:10:52.618 ****** 2026-02-03 03:48:47.367632 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-03 03:48:47.367638 | orchestrator | 2026-02-03 03:48:47.367644 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-02-03 03:48:47.367650 | orchestrator | Tuesday 03 February 2026 03:48:45 +0000 (0:00:00.588) 0:10:53.207 ****** 2026-02-03 03:48:47.367657 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:48:47.367663 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:48:47.367669 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:48:47.367675 | orchestrator | 2026-02-03 03:48:47.367682 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-02-03 03:48:47.367688 | orchestrator | Tuesday 03 February 2026 03:48:45 +0000 (0:00:00.733) 0:10:53.940 ****** 2026-02-03 03:48:47.367694 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:48:47.367700 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:48:47.367706 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:48:47.367712 | orchestrator | 2026-02-03 03:48:47.367718 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-02-03 03:48:47.367725 | orchestrator | Tuesday 03 February 2026 03:48:46 +0000 (0:00:00.401) 0:10:54.342 ****** 2026-02-03 03:48:47.367731 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-03 03:48:47.367738 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-03 03:48:47.367744 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-03 03:48:47.367750 | orchestrator 
| skipping: [testbed-node-3] 2026-02-03 03:48:47.367756 | orchestrator | 2026-02-03 03:48:47.367762 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-02-03 03:48:47.367769 | orchestrator | Tuesday 03 February 2026 03:48:47 +0000 (0:00:00.721) 0:10:55.064 ****** 2026-02-03 03:48:47.367781 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:48:47.367787 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:48:47.367793 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:48:47.367799 | orchestrator | 2026-02-03 03:48:47.367805 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 03:48:47.367812 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-02-03 03:48:47.367824 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-02-03 03:48:47.367831 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-02-03 03:48:47.367837 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-02-03 03:48:47.367849 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-02-03 03:48:47.993494 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-02-03 03:48:47.993652 | orchestrator | 2026-02-03 03:48:47.993670 | orchestrator | 2026-02-03 03:48:47.993680 | orchestrator | 2026-02-03 03:48:47.993687 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 03:48:47.993694 | orchestrator | Tuesday 03 February 2026 03:48:47 +0000 (0:00:00.283) 0:10:55.348 ****** 2026-02-03 03:48:47.993701 | orchestrator | =============================================================================== 
2026-02-03 03:48:47.993708 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 52.16s
2026-02-03 03:48:47.993714 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 42.64s
2026-02-03 03:48:47.993721 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.55s
2026-02-03 03:48:47.993727 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.10s
2026-02-03 03:48:47.993733 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.09s
2026-02-03 03:48:47.993740 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.56s
2026-02-03 03:48:47.993746 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.54s
2026-02-03 03:48:47.993752 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.24s
2026-02-03 03:48:47.993758 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 8.79s
2026-02-03 03:48:47.993764 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.01s
2026-02-03 03:48:47.993770 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.53s
2026-02-03 03:48:47.993776 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.45s
2026-02-03 03:48:47.993783 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.01s
2026-02-03 03:48:47.993789 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.93s
2026-02-03 03:48:47.993796 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.06s
2026-02-03 03:48:47.993802 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.88s
2026-02-03 03:48:47.993808 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.68s
2026-02-03 03:48:47.993814 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.62s
2026-02-03 03:48:47.993821 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.54s
2026-02-03 03:48:47.993827 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.20s
2026-02-03 03:48:50.539010 | orchestrator | 2026-02-03 03:48:50 | INFO  | Task 4ae70ed5-eeb5-4b55-a39c-a0a033e115f5 (ceph-pools) was prepared for execution.
2026-02-03 03:48:50.539133 | orchestrator | 2026-02-03 03:48:50 | INFO  | It takes a moment until task 4ae70ed5-eeb5-4b55-a39c-a0a033e115f5 (ceph-pools) has been started and output is visible here.
2026-02-03 03:49:05.928729 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-03 03:49:05.928811 | orchestrator | 2.16.14
2026-02-03 03:49:05.928818 | orchestrator |
2026-02-03 03:49:05.928823 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-02-03 03:49:05.928828 | orchestrator |
2026-02-03 03:49:05.928833 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-03 03:49:05.928837 | orchestrator | Tuesday 03 February 2026 03:48:55 +0000 (0:00:00.688) 0:00:00.688 ******
2026-02-03 03:49:05.928841 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 03:49:05.928846 | orchestrator |
2026-02-03 03:49:05.928850 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-03 03:49:05.928854 | orchestrator | Tuesday 03 February 2026 03:48:56 +0000 (0:00:00.819) 0:00:01.408 ******
2026-02-03 03:49:05.928858 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:49:05.928862 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:49:05.928866 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:49:05.928870 | orchestrator |
2026-02-03 03:49:05.928874 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-03 03:49:05.928877 | orchestrator | Tuesday 03 February 2026 03:48:56 +0000 (0:00:00.819) 0:00:02.227 ******
2026-02-03 03:49:05.928881 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:49:05.928885 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:49:05.928889 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:49:05.928893 | orchestrator |
2026-02-03 03:49:05.928897 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-03 03:49:05.928900 | orchestrator | Tuesday 03 February 2026 03:48:57 +0000 (0:00:00.331) 0:00:02.558 ******
2026-02-03 03:49:05.928916 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:49:05.928920 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:49:05.928924 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:49:05.928927 | orchestrator |
2026-02-03 03:49:05.928931 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-03 03:49:05.928935 | orchestrator | Tuesday 03 February 2026 03:48:58 +0000 (0:00:00.957) 0:00:03.515 ******
2026-02-03 03:49:05.928939 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:49:05.928943 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:49:05.928946 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:49:05.928950 | orchestrator |
2026-02-03 03:49:05.928954 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-03 03:49:05.928958 | orchestrator | Tuesday 03 February 2026 03:48:58 +0000 (0:00:00.337) 0:00:03.853 ******
2026-02-03 03:49:05.928962 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:49:05.928965 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:49:05.928969 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:49:05.928973 | orchestrator |
2026-02-03 03:49:05.928977 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-03 03:49:05.928980 | orchestrator | Tuesday 03 February 2026 03:48:58 +0000 (0:00:00.355) 0:00:04.208 ******
2026-02-03 03:49:05.928984 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:49:05.928988 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:49:05.928992 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:49:05.928996 | orchestrator |
2026-02-03 03:49:05.929000 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-03 03:49:05.929004 | orchestrator | Tuesday 03 February 2026 03:48:59 +0000 (0:00:00.335) 0:00:04.544 ******
2026-02-03 03:49:05.929008 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:49:05.929012 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:49:05.929016 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:49:05.929032 | orchestrator |
2026-02-03 03:49:05.929036 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-03 03:49:05.929040 | orchestrator | Tuesday 03 February 2026 03:48:59 +0000 (0:00:00.581) 0:00:05.125 ******
2026-02-03 03:49:05.929043 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:49:05.929047 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:49:05.929051 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:49:05.929055 | orchestrator |
2026-02-03 03:49:05.929059 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-03 03:49:05.929062 | orchestrator | Tuesday 03 February 2026 03:49:00 +0000 (0:00:00.329) 0:00:05.454 ******
2026-02-03 03:49:05.929066 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-03 03:49:05.929070 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-03 03:49:05.929074 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-03 03:49:05.929078 | orchestrator |
2026-02-03 03:49:05.929082 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-03 03:49:05.929085 | orchestrator | Tuesday 03 February 2026 03:49:00 +0000 (0:00:00.717) 0:00:06.172 ******
2026-02-03 03:49:05.929089 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:49:05.929093 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:49:05.929097 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:49:05.929100 | orchestrator |
2026-02-03 03:49:05.929104 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-03 03:49:05.929108 | orchestrator | Tuesday 03 February 2026 03:49:01 +0000 (0:00:00.471) 0:00:06.644 ******
2026-02-03 03:49:05.929112 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-03 03:49:05.929116 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-03 03:49:05.929119 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-03 03:49:05.929123 | orchestrator |
2026-02-03 03:49:05.929127 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-03 03:49:05.929131 | orchestrator | Tuesday 03 February 2026 03:49:03 +0000 (0:00:02.271) 0:00:08.915 ******
2026-02-03 03:49:05.929135 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-03 03:49:05.929139 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-03 03:49:05.929143 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-03 03:49:05.929147 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:49:05.929150 | orchestrator |
2026-02-03 03:49:05.929164 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-03 03:49:05.929168 | orchestrator | Tuesday 03 February 2026 03:49:04 +0000 (0:00:00.692) 0:00:09.607 ******
2026-02-03 03:49:05.929174 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-03 03:49:05.929180 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-03 03:49:05.929184 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-03 03:49:05.929188 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:49:05.929192 | orchestrator |
2026-02-03 03:49:05.929196 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-03 03:49:05.929199 | orchestrator | Tuesday 03 February 2026 03:49:05 +0000 (0:00:01.196) 0:00:10.803 ******
2026-02-03 03:49:05.929212 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-03 03:49:05.929218 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-03 03:49:05.929222 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-03 03:49:05.929226 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:49:05.929230 | orchestrator |
2026-02-03 03:49:05.929234 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-03 03:49:05.929237 | orchestrator | Tuesday 03 February 2026 03:49:05 +0000 (0:00:00.195) 0:00:10.998 ******
2026-02-03 03:49:05.929243 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'f906be70bf4b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-03 03:49:02.226990', 'end': '2026-02-03 03:49:02.275861', 'delta': '0:00:00.048871', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f906be70bf4b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-03 03:49:05.929248 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9e707d2df2a9', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-03 03:49:02.829128', 'end': '2026-02-03 03:49:02.882934', 'delta': '0:00:00.053806', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9e707d2df2a9'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-03 03:49:05.929256 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '7edf8d69a692', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-03 03:49:03.384882', 'end': '2026-02-03 03:49:03.439804', 'delta': '0:00:00.054922', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7edf8d69a692'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-03 03:49:13.328923 | orchestrator |
2026-02-03 03:49:13.329855 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-03 03:49:13.329907 | orchestrator | Tuesday 03 February 2026 03:49:05 +0000 (0:00:00.242) 0:00:11.240 ******
2026-02-03 03:49:13.329913 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:49:13.329918 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:49:13.329922 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:49:13.329926 | orchestrator |
2026-02-03 03:49:13.329931 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-03 03:49:13.329935 | orchestrator | Tuesday 03 February 2026 03:49:06 +0000 (0:00:00.469) 0:00:11.710 ******
2026-02-03 03:49:13.329949 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-02-03 03:49:13.329953 | orchestrator |
2026-02-03 03:49:13.329957 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-03 03:49:13.329961 | orchestrator | Tuesday 03 February 2026 03:49:08 +0000 (0:00:01.761) 0:00:13.472 ******
2026-02-03 03:49:13.329965 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:49:13.329969 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:49:13.329973 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:49:13.329977 | orchestrator |
2026-02-03 03:49:13.329981 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-03 03:49:13.329985 | orchestrator | Tuesday 03 February 2026 03:49:08 +0000 (0:00:00.365) 0:00:13.837 ******
2026-02-03 03:49:13.329989 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:49:13.329993 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:49:13.329996 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:49:13.330000 | orchestrator |
2026-02-03 03:49:13.330004 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-03 03:49:13.330008 | orchestrator | Tuesday 03 February 2026 03:49:09 +0000 (0:00:00.936) 0:00:14.774 ******
2026-02-03 03:49:13.330012 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:49:13.330057 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:49:13.330061 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:49:13.330065 | orchestrator |
2026-02-03 03:49:13.330069 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-03 03:49:13.330072 | orchestrator | Tuesday 03 February 2026 03:49:09 +0000 (0:00:00.326) 0:00:15.100 ******
2026-02-03 03:49:13.330076 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:49:13.330080 | orchestrator |
2026-02-03 03:49:13.330084 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-03 03:49:13.330088 | orchestrator | Tuesday 03 February 2026 03:49:09 +0000 (0:00:00.142) 0:00:15.242 ******
2026-02-03 03:49:13.330092 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:49:13.330095 | orchestrator |
2026-02-03 03:49:13.330099 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-03 03:49:13.330103 | orchestrator | Tuesday 03 February 2026 03:49:10 +0000 (0:00:00.249) 0:00:15.492 ******
2026-02-03 03:49:13.330107 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:49:13.330111 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:49:13.330115 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:49:13.330118 | orchestrator |
2026-02-03 03:49:13.330122 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-03 03:49:13.330126 | orchestrator | Tuesday 03 February 2026 03:49:10 +0000 (0:00:00.344) 0:00:15.836 ******
2026-02-03 03:49:13.330130 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:49:13.330134 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:49:13.330137 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:49:13.330141 | orchestrator |
2026-02-03 03:49:13.330145 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-03 03:49:13.330149 | orchestrator | Tuesday 03 February 2026 03:49:10 +0000 (0:00:00.349) 0:00:16.186 ******
2026-02-03 03:49:13.330158 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:49:13.330162 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:49:13.330166 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:49:13.330169 | orchestrator |
2026-02-03 03:49:13.330178 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-03 03:49:13.330185 | orchestrator | Tuesday 03 February 2026 03:49:11 +0000 (0:00:00.604) 0:00:16.790 ******
2026-02-03 03:49:13.330189 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:49:13.330193 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:49:13.330197 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:49:13.330201 | orchestrator |
2026-02-03 03:49:13.330205 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-03 03:49:13.330208 | orchestrator | Tuesday 03 February 2026 03:49:11 +0000 (0:00:00.345) 0:00:17.136 ******
2026-02-03 03:49:13.330213 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:49:13.330217 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:49:13.330220 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:49:13.330224 | orchestrator |
2026-02-03 03:49:13.330228 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-03 03:49:13.330232 | orchestrator | Tuesday 03 February 2026 03:49:12 +0000 (0:00:00.360) 0:00:17.497 ******
2026-02-03 03:49:13.330236 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:49:13.330240 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:49:13.330243 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:49:13.330247 | orchestrator |
2026-02-03 03:49:13.330251 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-03 03:49:13.330256 | orchestrator | Tuesday 03 February 2026 03:49:12 +0000 (0:00:00.558) 0:00:18.056 ******
2026-02-03 03:49:13.330260 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:49:13.330263 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:49:13.330267 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:49:13.330271 | orchestrator |
2026-02-03 03:49:13.330275 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-03 03:49:13.330279 | orchestrator | Tuesday 03 February 2026 03:49:13 +0000 (0:00:00.360) 0:00:18.416 ******
2026-02-03 03:49:13.330300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--85b6ff9c--bd3f--596f--9d81--0006b9d69e29-osd--block--85b6ff9c--bd3f--596f--9d81--0006b9d69e29', 'dm-uuid-LVM-eCnBPCzOsBAMg7ZG1zzxsebDLR9lBnAnVax7APxd4A5hvnIJK2L8WYuJjgErTdLp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-03 03:49:13.330311 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bafb60f3--a5a9--526b--adce--8ea58a9a19cd-osd--block--bafb60f3--a5a9--526b--adce--8ea58a9a19cd', 'dm-uuid-LVM-stKE3AAHbU7tUFxIQAJ72dtWy4EVot1jnVMQamLoChpHBSYL0cLNGgZFRZ56lw3T'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-03 03:49:13.330316 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-03 03:49:13.330322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-03 03:49:13.330329 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-03 03:49:13.330333 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-03 03:49:13.330337 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-03 03:49:13.330341 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-03 03:49:13.330345 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-03 03:49:13.330354 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-03 03:49:13.451014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part1', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part14', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part15', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part16', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-03 03:49:13.451109 |
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--85b6ff9c--bd3f--596f--9d81--0006b9d69e29-osd--block--85b6ff9c--bd3f--596f--9d81--0006b9d69e29'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Xh8ZTx-AObI-x7Qe-6Flc-GeSw-194p-Pfmv8i', 'scsi-0QEMU_QEMU_HARDDISK_b4cf4752-8315-482c-8a5b-0aee9859091f', 'scsi-SQEMU_QEMU_HARDDISK_b4cf4752-8315-482c-8a5b-0aee9859091f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:49:13.451118 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--121565c5--01e5--5794--959e--075d91e35362-osd--block--121565c5--01e5--5794--959e--075d91e35362', 'dm-uuid-LVM-JxrjzObQ9uufb9OS44FMciQneXibANhw0SrgRPhb81g1cZ8CRqdeozHyruPhRzun'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-03 03:49:13.451136 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1a37b12a--042e--589b--8d7d--13944ef33291-osd--block--1a37b12a--042e--589b--8d7d--13944ef33291', 'dm-uuid-LVM-F6tlR8rX28mHBuGZmIB9CPxCef1PwVO1F69HDz3pfwyuxUfx8QlY6u3q4wNOYZvt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': 
'512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-03 03:49:13.451144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--bafb60f3--a5a9--526b--adce--8ea58a9a19cd-osd--block--bafb60f3--a5a9--526b--adce--8ea58a9a19cd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MNylkH-UFIw-FcM9-RNy8-22Oh-QCDT-pfyDSJ', 'scsi-0QEMU_QEMU_HARDDISK_8097be92-44ca-4be8-a1da-39ba5887696e', 'scsi-SQEMU_QEMU_HARDDISK_8097be92-44ca-4be8-a1da-39ba5887696e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:49:13.451149 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:49:13.451160 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_30942d1f-f704-43cc-bdd1-e5a5821a35c3', 'scsi-SQEMU_QEMU_HARDDISK_30942d1f-f704-43cc-bdd1-e5a5821a35c3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:49:13.451165 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-03-02-24-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:49:13.451169 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:49:13.451174 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-02-03 03:49:13.451178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:49:13.451185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:49:13.665706 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:49:13.665798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:49:13.665831 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:49:13.665845 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part16'], 'labels': ['BOOT'], 'masters': 
[], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:49:13.665889 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--121565c5--01e5--5794--959e--075d91e35362-osd--block--121565c5--01e5--5794--959e--075d91e35362'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-OIAfSx-9FrO-G71T-2YtW-9cXZ-u9sv-iVlruI', 'scsi-0QEMU_QEMU_HARDDISK_6b074c22-654d-40e5-9251-7e10d9fad41a', 'scsi-SQEMU_QEMU_HARDDISK_6b074c22-654d-40e5-9251-7e10d9fad41a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:49:13.665922 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1a37b12a--042e--589b--8d7d--13944ef33291-osd--block--1a37b12a--042e--589b--8d7d--13944ef33291'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QlIL1O-6aa2-xc1n-eTaR-0yU7-qpeR-rfKE1n', 'scsi-0QEMU_QEMU_HARDDISK_f58f055b-eadc-4fe1-a72e-2d1917f1f2dd', 'scsi-SQEMU_QEMU_HARDDISK_f58f055b-eadc-4fe1-a72e-2d1917f1f2dd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:49:13.665947 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15b94581-7087-40af-83f2-cd9970e768be', 'scsi-SQEMU_QEMU_HARDDISK_15b94581-7087-40af-83f2-cd9970e768be'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:49:13.665963 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-03-02-24-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:49:13.665979 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:49:13.665995 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:49:13.666009 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9cbb71d1--90c1--5063--b304--f845b9e79bfb-osd--block--9cbb71d1--90c1--5063--b304--f845b9e79bfb', 'dm-uuid-LVM-mOPc0Zn7dvz2LW84SWB0gFMNdSnKuErspTdMvdsDAFIMSx8jpl0O46FJH5Fa8Xca'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-03 03:49:13.666086 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--77c51d77--cdc1--5563--af81--33d9bc4e9bd8-osd--block--77c51d77--cdc1--5563--af81--33d9bc4e9bd8', 'dm-uuid-LVM-Wbq8zZZmzC2gBNhxYxtVTvfLotN9I39ewfUHEKJIYaxWx1lem6PI2cmyC5FHw26a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-03 03:49:13.666102 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:49:13.666131 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:49:13.944046 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:49:13.944142 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:49:13.944152 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:49:13.944160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-02-03 03:49:13.944167 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:49:13.944174 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-03 03:49:13.944204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part1', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part14', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part15', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part16', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:49:13.944221 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9cbb71d1--90c1--5063--b304--f845b9e79bfb-osd--block--9cbb71d1--90c1--5063--b304--f845b9e79bfb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fs9ehM-rHKw-gnft-ZAPg-F21u-3MhY-bxvv54', 'scsi-0QEMU_QEMU_HARDDISK_a2e14d93-a486-403c-9c37-4f6de49ddee5', 'scsi-SQEMU_QEMU_HARDDISK_a2e14d93-a486-403c-9c37-4f6de49ddee5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:49:13.944230 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--77c51d77--cdc1--5563--af81--33d9bc4e9bd8-osd--block--77c51d77--cdc1--5563--af81--33d9bc4e9bd8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XC0deN-vGzU-6Pu8-7l0p-bm5X-RdCc-NCjXuW', 'scsi-0QEMU_QEMU_HARDDISK_0bcbc917-1e4e-4947-8603-c7f49bd04ea8', 'scsi-SQEMU_QEMU_HARDDISK_0bcbc917-1e4e-4947-8603-c7f49bd04ea8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:49:13.944238 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ed5f26b-b68f-43a9-951f-f4acae255308', 'scsi-SQEMU_QEMU_HARDDISK_1ed5f26b-b68f-43a9-951f-f4acae255308'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:49:13.944247 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-03-02-24-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-03 03:49:13.944255 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:49:13.944264 | orchestrator | 2026-02-03 03:49:13.944271 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-03 03:49:13.944280 | orchestrator | Tuesday 03 February 2026 03:49:13 +0000 (0:00:00.705) 0:00:19.121 ****** 2026-02-03 03:49:13.944296 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--85b6ff9c--bd3f--596f--9d81--0006b9d69e29-osd--block--85b6ff9c--bd3f--596f--9d81--0006b9d69e29', 'dm-uuid-LVM-eCnBPCzOsBAMg7ZG1zzxsebDLR9lBnAnVax7APxd4A5hvnIJK2L8WYuJjgErTdLp'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.055801 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bafb60f3--a5a9--526b--adce--8ea58a9a19cd-osd--block--bafb60f3--a5a9--526b--adce--8ea58a9a19cd', 'dm-uuid-LVM-stKE3AAHbU7tUFxIQAJ72dtWy4EVot1jnVMQamLoChpHBSYL0cLNGgZFRZ56lw3T'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.055889 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.055903 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.055912 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.055922 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.055932 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.056011 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.056025 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.056034 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.056044 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--121565c5--01e5--5794--959e--075d91e35362-osd--block--121565c5--01e5--5794--959e--075d91e35362', 'dm-uuid-LVM-JxrjzObQ9uufb9OS44FMciQneXibANhw0SrgRPhb81g1cZ8CRqdeozHyruPhRzun'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.056070 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part1', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part14', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part15', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part16', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-03 03:49:14.156046 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--85b6ff9c--bd3f--596f--9d81--0006b9d69e29-osd--block--85b6ff9c--bd3f--596f--9d81--0006b9d69e29'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Xh8ZTx-AObI-x7Qe-6Flc-GeSw-194p-Pfmv8i', 'scsi-0QEMU_QEMU_HARDDISK_b4cf4752-8315-482c-8a5b-0aee9859091f', 'scsi-SQEMU_QEMU_HARDDISK_b4cf4752-8315-482c-8a5b-0aee9859091f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.156123 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1a37b12a--042e--589b--8d7d--13944ef33291-osd--block--1a37b12a--042e--589b--8d7d--13944ef33291', 'dm-uuid-LVM-F6tlR8rX28mHBuGZmIB9CPxCef1PwVO1F69HDz3pfwyuxUfx8QlY6u3q4wNOYZvt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.156131 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--bafb60f3--a5a9--526b--adce--8ea58a9a19cd-osd--block--bafb60f3--a5a9--526b--adce--8ea58a9a19cd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MNylkH-UFIw-FcM9-RNy8-22Oh-QCDT-pfyDSJ', 'scsi-0QEMU_QEMU_HARDDISK_8097be92-44ca-4be8-a1da-39ba5887696e', 'scsi-SQEMU_QEMU_HARDDISK_8097be92-44ca-4be8-a1da-39ba5887696e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.156145 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.156188 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_30942d1f-f704-43cc-bdd1-e5a5821a35c3', 'scsi-SQEMU_QEMU_HARDDISK_30942d1f-f704-43cc-bdd1-e5a5821a35c3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.156195 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.156200 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-03-02-24-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.156205 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.156211 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.156216 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.156228 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.156240 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.294254 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.294348 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:49:14.294366 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.294416 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--121565c5--01e5--5794--959e--075d91e35362-osd--block--121565c5--01e5--5794--959e--075d91e35362'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-OIAfSx-9FrO-G71T-2YtW-9cXZ-u9sv-iVlruI', 'scsi-0QEMU_QEMU_HARDDISK_6b074c22-654d-40e5-9251-7e10d9fad41a', 'scsi-SQEMU_QEMU_HARDDISK_6b074c22-654d-40e5-9251-7e10d9fad41a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.294448 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1a37b12a--042e--589b--8d7d--13944ef33291-osd--block--1a37b12a--042e--589b--8d7d--13944ef33291'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QlIL1O-6aa2-xc1n-eTaR-0yU7-qpeR-rfKE1n', 'scsi-0QEMU_QEMU_HARDDISK_f58f055b-eadc-4fe1-a72e-2d1917f1f2dd', 'scsi-SQEMU_QEMU_HARDDISK_f58f055b-eadc-4fe1-a72e-2d1917f1f2dd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.294459 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15b94581-7087-40af-83f2-cd9970e768be', 'scsi-SQEMU_QEMU_HARDDISK_15b94581-7087-40af-83f2-cd9970e768be'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.294471 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-03-02-24-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.294486 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:49:14.294501 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9cbb71d1--90c1--5063--b304--f845b9e79bfb-osd--block--9cbb71d1--90c1--5063--b304--f845b9e79bfb', 'dm-uuid-LVM-mOPc0Zn7dvz2LW84SWB0gFMNdSnKuErspTdMvdsDAFIMSx8jpl0O46FJH5Fa8Xca'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.294513 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--77c51d77--cdc1--5563--af81--33d9bc4e9bd8-osd--block--77c51d77--cdc1--5563--af81--33d9bc4e9bd8', 'dm-uuid-LVM-Wbq8zZZmzC2gBNhxYxtVTvfLotN9I39ewfUHEKJIYaxWx1lem6PI2cmyC5FHw26a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.294530 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.416675 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.416775 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.416789 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.416823 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.416849 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.416859 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.416890 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.416904 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part1', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part14', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part15', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part16', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-03 03:49:14.416937 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--9cbb71d1--90c1--5063--b304--f845b9e79bfb-osd--block--9cbb71d1--90c1--5063--b304--f845b9e79bfb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fs9ehM-rHKw-gnft-ZAPg-F21u-3MhY-bxvv54', 'scsi-0QEMU_QEMU_HARDDISK_a2e14d93-a486-403c-9c37-4f6de49ddee5', 'scsi-SQEMU_QEMU_HARDDISK_a2e14d93-a486-403c-9c37-4f6de49ddee5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:14.416957 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--77c51d77--cdc1--5563--af81--33d9bc4e9bd8-osd--block--77c51d77--cdc1--5563--af81--33d9bc4e9bd8'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XC0deN-vGzU-6Pu8-7l0p-bm5X-RdCc-NCjXuW', 'scsi-0QEMU_QEMU_HARDDISK_0bcbc917-1e4e-4947-8603-c7f49bd04ea8', 'scsi-SQEMU_QEMU_HARDDISK_0bcbc917-1e4e-4947-8603-c7f49bd04ea8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:25.439422 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ed5f26b-b68f-43a9-951f-f4acae255308', 'scsi-SQEMU_QEMU_HARDDISK_1ed5f26b-b68f-43a9-951f-f4acae255308'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-03 03:49:25.439544 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-03-02-24-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-03 03:49:25.439678 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:49:25.439693 | orchestrator |
2026-02-03 03:49:25.439707 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-03 03:49:25.439728 | orchestrator | Tuesday 03 February 2026 03:49:14 +0000 (0:00:00.712) 0:00:19.834 ******
2026-02-03 03:49:25.439747 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:49:25.439766 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:49:25.439784 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:49:25.439803 | orchestrator |
2026-02-03 03:49:25.439821 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-03 03:49:25.439840 | orchestrator | Tuesday 03 February 2026 03:49:15 +0000 (0:00:00.988) 0:00:20.822 ******
2026-02-03 03:49:25.439860 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:49:25.439878 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:49:25.439897 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:49:25.439916 | orchestrator |
2026-02-03 03:49:25.439935 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-03 03:49:25.439953 | orchestrator | Tuesday 03 February 2026 03:49:15 +0000 (0:00:00.347) 0:00:21.169 ******
2026-02-03 03:49:25.439966 | orchestrator | ok: [testbed-node-3]
2026-02-03 03:49:25.439995 | orchestrator | ok: [testbed-node-4]
2026-02-03 03:49:25.440025 | orchestrator | ok: [testbed-node-5]
2026-02-03 03:49:25.440038 | orchestrator |
2026-02-03 03:49:25.440060 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-03 03:49:25.440071 | orchestrator | Tuesday 03 February 2026 03:49:16 +0000 (0:00:00.681) 0:00:21.851 ******
2026-02-03 03:49:25.440082 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:49:25.440093 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:49:25.440104 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:49:25.440115 | orchestrator |
2026-02-03 03:49:25.440125 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-03 03:49:25.440136 | orchestrator | Tuesday 03 February 2026 03:49:16 +0000 (0:00:00.318) 0:00:22.169 ******
2026-02-03 03:49:25.440147 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:49:25.440158 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:49:25.440168 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:49:25.440179 | orchestrator |
2026-02-03 03:49:25.440190 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-03 03:49:25.440201 | orchestrator | Tuesday 03 February 2026 03:49:17 +0000 (0:00:00.751) 0:00:22.921 ******
2026-02-03 03:49:25.440211 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:49:25.440222 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:49:25.440233 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:49:25.440244 | orchestrator |
2026-02-03 03:49:25.440255 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-03 03:49:25.440265 | orchestrator | Tuesday 03 February 2026 03:49:17 +0000 (0:00:00.355) 0:00:23.277 ******
2026-02-03 03:49:25.440276 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-03 03:49:25.440288 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-03 03:49:25.440298 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-03 03:49:25.440309 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-03 03:49:25.440324 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-03 03:49:25.440365 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-03 03:49:25.440387 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-03 03:49:25.440405 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-03 03:49:25.440425 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-03 03:49:25.440443 | orchestrator |
2026-02-03 03:49:25.440461 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-03 03:49:25.440479 | orchestrator | Tuesday 03 February 2026 03:49:19 +0000 (0:00:01.121) 0:00:24.398 ******
2026-02-03 03:49:25.440521 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-03 03:49:25.440539 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-03 03:49:25.440590 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-03 03:49:25.440608 | orchestrator | skipping: [testbed-node-3]
2026-02-03 03:49:25.440624 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-03 03:49:25.440643 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-03 03:49:25.440658 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-03 03:49:25.440675 | orchestrator | skipping: [testbed-node-4]
2026-02-03 03:49:25.440693 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-03 03:49:25.440711 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-03 03:49:25.440731 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-03 03:49:25.440750 | orchestrator | skipping: [testbed-node-5]
2026-02-03 03:49:25.440768 | orchestrator |
2026-02-03 03:49:25.440786 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-03 03:49:25.440797 | orchestrator | Tuesday 03 February 2026 03:49:19 +0000 (0:00:00.387) 0:00:24.785 ******
2026-02-03
03:49:25.440809 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-03 03:49:25.440821 | orchestrator | 2026-02-03 03:49:25.440832 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-03 03:49:25.440845 | orchestrator | Tuesday 03 February 2026 03:49:20 +0000 (0:00:00.799) 0:00:25.585 ****** 2026-02-03 03:49:25.440856 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:49:25.440867 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:49:25.440878 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:49:25.440888 | orchestrator | 2026-02-03 03:49:25.440899 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-03 03:49:25.440911 | orchestrator | Tuesday 03 February 2026 03:49:20 +0000 (0:00:00.373) 0:00:25.958 ****** 2026-02-03 03:49:25.440922 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:49:25.440932 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:49:25.440943 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:49:25.440954 | orchestrator | 2026-02-03 03:49:25.440965 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-03 03:49:25.440976 | orchestrator | Tuesday 03 February 2026 03:49:21 +0000 (0:00:00.363) 0:00:26.322 ****** 2026-02-03 03:49:25.440987 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:49:25.440998 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:49:25.441009 | orchestrator | skipping: [testbed-node-5] 2026-02-03 03:49:25.441020 | orchestrator | 2026-02-03 03:49:25.441031 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-03 03:49:25.441043 | orchestrator | Tuesday 03 February 2026 03:49:21 +0000 (0:00:00.604) 0:00:26.927 ****** 2026-02-03 
03:49:25.441054 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:49:25.441065 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:49:25.441076 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:49:25.441086 | orchestrator | 2026-02-03 03:49:25.441098 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-03 03:49:25.441109 | orchestrator | Tuesday 03 February 2026 03:49:22 +0000 (0:00:00.441) 0:00:27.369 ****** 2026-02-03 03:49:25.441132 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-03 03:49:25.441150 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-03 03:49:25.441162 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-03 03:49:25.441173 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:49:25.441184 | orchestrator | 2026-02-03 03:49:25.441195 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-03 03:49:25.441209 | orchestrator | Tuesday 03 February 2026 03:49:22 +0000 (0:00:00.409) 0:00:27.778 ****** 2026-02-03 03:49:25.441227 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-03 03:49:25.441257 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-03 03:49:25.441276 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-03 03:49:25.441295 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:49:25.441311 | orchestrator | 2026-02-03 03:49:25.441329 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-03 03:49:25.441348 | orchestrator | Tuesday 03 February 2026 03:49:22 +0000 (0:00:00.407) 0:00:28.185 ****** 2026-02-03 03:49:25.441366 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-03 03:49:25.441385 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-03 03:49:25.441417 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-03 03:49:25.441430 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:49:25.441441 | orchestrator | 2026-02-03 03:49:25.441452 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-03 03:49:25.441463 | orchestrator | Tuesday 03 February 2026 03:49:23 +0000 (0:00:00.455) 0:00:28.641 ****** 2026-02-03 03:49:25.441474 | orchestrator | ok: [testbed-node-3] 2026-02-03 03:49:25.441485 | orchestrator | ok: [testbed-node-4] 2026-02-03 03:49:25.441495 | orchestrator | ok: [testbed-node-5] 2026-02-03 03:49:25.441506 | orchestrator | 2026-02-03 03:49:25.441518 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-03 03:49:25.441528 | orchestrator | Tuesday 03 February 2026 03:49:23 +0000 (0:00:00.392) 0:00:29.034 ****** 2026-02-03 03:49:25.441539 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-03 03:49:25.441594 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-03 03:49:25.441611 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-03 03:49:25.441622 | orchestrator | 2026-02-03 03:49:25.441633 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-03 03:49:25.441644 | orchestrator | Tuesday 03 February 2026 03:49:24 +0000 (0:00:00.841) 0:00:29.875 ****** 2026-02-03 03:49:25.441656 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 03:49:25.441683 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 03:51:05.664120 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 03:51:05.664262 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-03 03:51:05.664282 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-02-03 03:51:05.664296 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-03 03:51:05.664308 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-03 03:51:05.664320 | orchestrator | 2026-02-03 03:51:05.664333 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-03 03:51:05.664345 | orchestrator | Tuesday 03 February 2026 03:49:25 +0000 (0:00:00.873) 0:00:30.749 ****** 2026-02-03 03:51:05.664356 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 03:51:05.664367 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 03:51:05.664379 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 03:51:05.664415 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-03 03:51:05.664428 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-03 03:51:05.664444 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-03 03:51:05.664463 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-03 03:51:05.664481 | orchestrator | 2026-02-03 03:51:05.664500 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-02-03 03:51:05.664519 | orchestrator | Tuesday 03 February 2026 03:49:27 +0000 (0:00:01.717) 0:00:32.466 ****** 2026-02-03 03:51:05.664605 | orchestrator | skipping: [testbed-node-3] 2026-02-03 03:51:05.664625 | orchestrator | skipping: [testbed-node-4] 2026-02-03 03:51:05.664644 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-02-03 03:51:05.664660 | orchestrator | 2026-02-03 03:51:05.664676 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-02-03 03:51:05.664692 | orchestrator | Tuesday 03 February 2026 03:49:27 +0000 (0:00:00.441) 0:00:32.907 ****** 2026-02-03 03:51:05.664711 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-03 03:51:05.664730 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-03 03:51:05.664766 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-03 03:51:05.664782 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-03 03:51:05.664798 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-03 03:51:05.664815 | orchestrator | 2026-02-03 03:51:05.664832 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-02-03 03:51:05.664848 | orchestrator | Tuesday 03 February 2026 03:50:12 +0000 (0:00:45.381) 0:01:18.289 ****** 2026-02-03 03:51:05.664863 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 03:51:05.664878 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 03:51:05.664970 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 03:51:05.664992 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 03:51:05.665011 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 03:51:05.665028 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 03:51:05.665047 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-02-03 03:51:05.665065 | orchestrator | 2026-02-03 03:51:05.665083 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-02-03 03:51:05.665102 | orchestrator | Tuesday 03 February 2026 03:50:36 +0000 (0:00:23.400) 0:01:41.689 ****** 2026-02-03 03:51:05.665169 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 03:51:05.665191 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 03:51:05.665209 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 03:51:05.665224 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 03:51:05.665237 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 03:51:05.665256 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 03:51:05.665274 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-03 03:51:05.665292 | orchestrator | 2026-02-03 03:51:05.665310 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-02-03 03:51:05.665327 | orchestrator | Tuesday 03 February 2026 03:50:47 +0000 (0:00:11.577) 0:01:53.267 ****** 2026-02-03 03:51:05.665343 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 03:51:05.665359 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-03 03:51:05.665376 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-03 03:51:05.665392 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 03:51:05.665408 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-03 03:51:05.665423 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-03 03:51:05.665441 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 03:51:05.665457 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-03 03:51:05.665473 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-03 03:51:05.665489 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 03:51:05.665507 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-03 03:51:05.665523 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-03 03:51:05.665571 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 03:51:05.665590 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-02-03 03:51:05.665608 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-03 03:51:05.665625 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 03:51:05.665643 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-03 03:51:05.665662 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-03 03:51:05.665682 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-02-03 03:51:05.665702 | orchestrator | 2026-02-03 03:51:05.665733 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 03:51:05.665746 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-03 03:51:05.665759 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-03 03:51:05.665771 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-03 03:51:05.665782 | orchestrator | 2026-02-03 03:51:05.665794 | orchestrator | 2026-02-03 03:51:05.665805 | orchestrator | 2026-02-03 03:51:05.665816 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 03:51:05.665839 | orchestrator | Tuesday 03 February 2026 03:51:05 +0000 (0:00:17.694) 0:02:10.961 ****** 2026-02-03 03:51:05.665850 | orchestrator | =============================================================================== 2026-02-03 03:51:05.665861 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.38s 2026-02-03 03:51:05.665872 | orchestrator | generate keys ---------------------------------------------------------- 23.40s 2026-02-03 03:51:05.665883 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.69s 
2026-02-03 03:51:05.665894 | orchestrator | get keys from monitors ------------------------------------------------- 11.58s 2026-02-03 03:51:05.665905 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.27s 2026-02-03 03:51:05.665916 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.76s 2026-02-03 03:51:05.665927 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.72s 2026-02-03 03:51:05.665939 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 1.20s 2026-02-03 03:51:05.665950 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.12s 2026-02-03 03:51:05.665961 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.99s 2026-02-03 03:51:05.665972 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.96s 2026-02-03 03:51:05.665983 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 0.94s 2026-02-03 03:51:05.665994 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.87s 2026-02-03 03:51:05.666082 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.84s 2026-02-03 03:51:06.041035 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.82s 2026-02-03 03:51:06.041130 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.80s 2026-02-03 03:51:06.041141 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.75s 2026-02-03 03:51:06.041148 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.72s 2026-02-03 03:51:06.041155 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.72s 2026-02-03 
03:51:06.041163 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.71s 2026-02-03 03:51:08.560656 | orchestrator | 2026-02-03 03:51:08 | INFO  | Task 761c5176-aefe-44fa-a8bf-d70b12da3566 (copy-ceph-keys) was prepared for execution. 2026-02-03 03:51:08.560729 | orchestrator | 2026-02-03 03:51:08 | INFO  | It takes a moment until task 761c5176-aefe-44fa-a8bf-d70b12da3566 (copy-ceph-keys) has been started and output is visible here. 2026-02-03 03:51:50.176383 | orchestrator | 2026-02-03 03:51:50.176476 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-02-03 03:51:50.176486 | orchestrator | 2026-02-03 03:51:50.176493 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-02-03 03:51:50.176500 | orchestrator | Tuesday 03 February 2026 03:51:13 +0000 (0:00:00.187) 0:00:00.187 ****** 2026-02-03 03:51:50.176507 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-03 03:51:50.176516 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-03 03:51:50.176588 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-03 03:51:50.176602 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-03 03:51:50.176612 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-03 03:51:50.176620 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-03 03:51:50.176629 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-03 03:51:50.176663 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.gnocchi.keyring) 2026-02-03 03:51:50.176673 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-03 03:51:50.176682 | orchestrator | 2026-02-03 03:51:50.176692 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-02-03 03:51:50.176702 | orchestrator | Tuesday 03 February 2026 03:51:17 +0000 (0:00:04.675) 0:00:04.862 ****** 2026-02-03 03:51:50.176726 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-03 03:51:50.176735 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-03 03:51:50.176744 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-03 03:51:50.176752 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-03 03:51:50.176761 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-03 03:51:50.176769 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-03 03:51:50.176778 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-03 03:51:50.176786 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-02-03 03:51:50.176795 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-03 03:51:50.176803 | orchestrator | 2026-02-03 03:51:50.176813 | orchestrator | TASK [Create share directory] ************************************************** 2026-02-03 03:51:50.176822 | orchestrator | Tuesday 03 February 2026 03:51:22 +0000 (0:00:04.286) 0:00:09.149 ****** 2026-02-03 03:51:50.176832 
| orchestrator | changed: [testbed-manager -> localhost] 2026-02-03 03:51:50.176842 | orchestrator | 2026-02-03 03:51:50.176850 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-02-03 03:51:50.176860 | orchestrator | Tuesday 03 February 2026 03:51:23 +0000 (0:00:01.002) 0:00:10.152 ****** 2026-02-03 03:51:50.176870 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-02-03 03:51:50.176901 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-03 03:51:50.176912 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-03 03:51:50.176921 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-02-03 03:51:50.176931 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-03 03:51:50.176942 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-02-03 03:51:50.176951 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-02-03 03:51:50.176962 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-02-03 03:51:50.176971 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-02-03 03:51:50.176981 | orchestrator | 2026-02-03 03:51:50.176991 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-02-03 03:51:50.177000 | orchestrator | Tuesday 03 February 2026 03:51:37 +0000 (0:00:14.019) 0:00:24.172 ****** 2026-02-03 03:51:50.177010 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-02-03 03:51:50.177020 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 
2026-02-03 03:51:50.177031 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-02-03 03:51:50.177040 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-02-03 03:51:50.177084 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-02-03 03:51:50.177097 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-02-03 03:51:50.177106 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-02-03 03:51:50.177116 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-02-03 03:51:50.177127 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-02-03 03:51:50.177134 | orchestrator | 2026-02-03 03:51:50.177141 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-02-03 03:51:50.177148 | orchestrator | Tuesday 03 February 2026 03:51:42 +0000 (0:00:05.305) 0:00:29.477 ****** 2026-02-03 03:51:50.177155 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-02-03 03:51:50.177162 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-03 03:51:50.177169 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-03 03:51:50.177176 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-02-03 03:51:50.177183 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-03 03:51:50.177190 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-02-03 03:51:50.177197 | orchestrator | changed: [testbed-manager] => 
(item=ceph.client.glance.keyring) 2026-02-03 03:51:50.177204 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-02-03 03:51:50.177211 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-02-03 03:51:50.177218 | orchestrator | 2026-02-03 03:51:50.177233 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 03:51:50.177240 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-03 03:51:50.177248 | orchestrator | 2026-02-03 03:51:50.177255 | orchestrator | 2026-02-03 03:51:50.177263 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 03:51:50.177270 | orchestrator | Tuesday 03 February 2026 03:51:49 +0000 (0:00:07.426) 0:00:36.904 ****** 2026-02-03 03:51:50.177277 | orchestrator | =============================================================================== 2026-02-03 03:51:50.177284 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.02s 2026-02-03 03:51:50.177291 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.43s 2026-02-03 03:51:50.177298 | orchestrator | Check if target directories exist --------------------------------------- 5.31s 2026-02-03 03:51:50.177305 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.68s 2026-02-03 03:51:50.177312 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.29s 2026-02-03 03:51:50.177318 | orchestrator | Create share directory -------------------------------------------------- 1.00s 2026-02-03 03:52:02.783894 | orchestrator | 2026-02-03 03:52:02 | INFO  | Task 0de41b06-6e94-4593-b2c1-943481844306 (cephclient) was prepared for execution. 
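The copy-ceph-keys play above fetches the generated client keyrings from the first monitor and fans them out into the configuration repository. The keyring-to-directory mapping can be read off the paired item lists in the "Check if ceph keys exist" and "Check if target directories exist" tasks (note the cinder keyring is delivered three times: cinder-volume, cinder-backup, and nova overlays, which is why it repeats in the log). A minimal sketch of that copy plan; the paths and keyring names come from the log, while the `copy_plan` helper itself is hypothetical:

```python
# Sketch: map each ceph client keyring to the overlay directories it is copied
# into, mirroring the item lists in the log above. Paths/names are from the log;
# this helper is illustrative only, not part of the actual playbook.

BASE = "/opt/configuration/environments"

# Keyring -> target subdirectories. The cinder keyring appears three times in
# the log because it is needed by cinder-volume, cinder-backup, and nova.
KEYRING_TARGETS = {
    "ceph.client.admin.keyring": ["infrastructure/files/ceph"],
    "ceph.client.cinder.keyring": [
        "kolla/files/overlays/cinder/cinder-volume",
        "kolla/files/overlays/cinder/cinder-backup",
        "kolla/files/overlays/nova",
    ],
    "ceph.client.cinder-backup.keyring": ["kolla/files/overlays/cinder/cinder-backup"],
    "ceph.client.nova.keyring": ["kolla/files/overlays/nova"],
    "ceph.client.glance.keyring": ["kolla/files/overlays/glance"],
    "ceph.client.gnocchi.keyring": ["kolla/files/overlays/gnocchi"],
    "ceph.client.manila.keyring": ["kolla/files/overlays/manila"],
}

def copy_plan(keyrings=KEYRING_TARGETS):
    """Return (keyring, destination path) pairs in a stable order."""
    return [
        (name, f"{BASE}/{sub}/{name}")
        for name, subs in keyrings.items()
        for sub in subs
    ]

if __name__ == "__main__":
    for name, dest in copy_plan():
        print(f"{name} -> {dest}")
```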
2026-02-03 03:52:02.783978 | orchestrator | 2026-02-03 03:52:02 | INFO  | It takes a moment until task 0de41b06-6e94-4593-b2c1-943481844306 (cephclient) has been started and output is visible here. 2026-02-03 03:53:05.708319 | orchestrator | 2026-02-03 03:53:05.708420 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-02-03 03:53:05.708434 | orchestrator | 2026-02-03 03:53:05.708443 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-02-03 03:53:05.708453 | orchestrator | Tuesday 03 February 2026 03:52:07 +0000 (0:00:00.248) 0:00:00.248 ****** 2026-02-03 03:53:05.708461 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-02-03 03:53:05.708491 | orchestrator | 2026-02-03 03:53:05.708500 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-02-03 03:53:05.708508 | orchestrator | Tuesday 03 February 2026 03:52:07 +0000 (0:00:00.254) 0:00:00.503 ****** 2026-02-03 03:53:05.708568 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-02-03 03:53:05.708579 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-02-03 03:53:05.708588 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-02-03 03:53:05.708596 | orchestrator | 2026-02-03 03:53:05.708605 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-02-03 03:53:05.708613 | orchestrator | Tuesday 03 February 2026 03:52:08 +0000 (0:00:01.277) 0:00:01.780 ****** 2026-02-03 03:53:05.708621 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-02-03 03:53:05.708630 | orchestrator | 2026-02-03 03:53:05.708638 | orchestrator | TASK [osism.services.cephclient : Copy keyring 
file] *************************** 2026-02-03 03:53:05.708646 | orchestrator | Tuesday 03 February 2026 03:52:10 +0000 (0:00:01.574) 0:00:03.355 ****** 2026-02-03 03:53:05.708654 | orchestrator | changed: [testbed-manager] 2026-02-03 03:53:05.708662 | orchestrator | 2026-02-03 03:53:05.708670 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-02-03 03:53:05.708678 | orchestrator | Tuesday 03 February 2026 03:52:11 +0000 (0:00:00.969) 0:00:04.325 ****** 2026-02-03 03:53:05.708686 | orchestrator | changed: [testbed-manager] 2026-02-03 03:53:05.708694 | orchestrator | 2026-02-03 03:53:05.708701 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-02-03 03:53:05.708709 | orchestrator | Tuesday 03 February 2026 03:52:12 +0000 (0:00:00.985) 0:00:05.310 ****** 2026-02-03 03:53:05.708717 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-02-03 03:53:05.708725 | orchestrator | ok: [testbed-manager] 2026-02-03 03:53:05.708733 | orchestrator | 2026-02-03 03:53:05.708741 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-02-03 03:53:05.708749 | orchestrator | Tuesday 03 February 2026 03:52:55 +0000 (0:00:42.778) 0:00:48.089 ****** 2026-02-03 03:53:05.708757 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-02-03 03:53:05.708766 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-02-03 03:53:05.708774 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-02-03 03:53:05.708782 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-02-03 03:53:05.708791 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-02-03 03:53:05.708799 | orchestrator | 2026-02-03 03:53:05.708807 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-02-03 03:53:05.708815 | 
orchestrator | Tuesday 03 February 2026 03:52:59 +0000 (0:00:04.327) 0:00:52.416 ****** 2026-02-03 03:53:05.708823 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-02-03 03:53:05.708831 | orchestrator | 2026-02-03 03:53:05.708839 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-02-03 03:53:05.708847 | orchestrator | Tuesday 03 February 2026 03:53:00 +0000 (0:00:00.570) 0:00:52.987 ****** 2026-02-03 03:53:05.708855 | orchestrator | skipping: [testbed-manager] 2026-02-03 03:53:05.708863 | orchestrator | 2026-02-03 03:53:05.708873 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-02-03 03:53:05.708882 | orchestrator | Tuesday 03 February 2026 03:53:00 +0000 (0:00:00.149) 0:00:53.136 ****** 2026-02-03 03:53:05.708892 | orchestrator | skipping: [testbed-manager] 2026-02-03 03:53:05.708901 | orchestrator | 2026-02-03 03:53:05.708924 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-02-03 03:53:05.708934 | orchestrator | Tuesday 03 February 2026 03:53:00 +0000 (0:00:00.608) 0:00:53.745 ****** 2026-02-03 03:53:05.708943 | orchestrator | changed: [testbed-manager] 2026-02-03 03:53:05.708966 | orchestrator | 2026-02-03 03:53:05.708976 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-02-03 03:53:05.708985 | orchestrator | Tuesday 03 February 2026 03:53:02 +0000 (0:00:01.539) 0:00:55.285 ****** 2026-02-03 03:53:05.708995 | orchestrator | changed: [testbed-manager] 2026-02-03 03:53:05.709003 | orchestrator | 2026-02-03 03:53:05.709013 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-02-03 03:53:05.709022 | orchestrator | Tuesday 03 February 2026 03:53:03 +0000 (0:00:00.738) 0:00:56.023 ****** 2026-02-03 03:53:05.709031 | orchestrator | changed: [testbed-manager] 2026-02-03 03:53:05.709040 | 
orchestrator | 2026-02-03 03:53:05.709050 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-02-03 03:53:05.709059 | orchestrator | Tuesday 03 February 2026 03:53:03 +0000 (0:00:00.663) 0:00:56.686 ****** 2026-02-03 03:53:05.709069 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-02-03 03:53:05.709078 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-02-03 03:53:05.709087 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-02-03 03:53:05.709097 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-02-03 03:53:05.709106 | orchestrator | 2026-02-03 03:53:05.709114 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 03:53:05.709122 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 03:53:05.709131 | orchestrator | 2026-02-03 03:53:05.709139 | orchestrator | 2026-02-03 03:53:05.709161 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 03:53:05.709169 | orchestrator | Tuesday 03 February 2026 03:53:05 +0000 (0:00:01.614) 0:00:58.301 ****** 2026-02-03 03:53:05.709177 | orchestrator | =============================================================================== 2026-02-03 03:53:05.709185 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 42.78s 2026-02-03 03:53:05.709193 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.33s 2026-02-03 03:53:05.709201 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.61s 2026-02-03 03:53:05.709209 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.57s 2026-02-03 03:53:05.709217 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.54s 2026-02-03 03:53:05.709225 | 
orchestrator | osism.services.cephclient : Create required directories ----------------- 1.28s 2026-02-03 03:53:05.709233 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.99s 2026-02-03 03:53:05.709241 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.97s 2026-02-03 03:53:05.709249 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.74s 2026-02-03 03:53:05.709257 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.66s 2026-02-03 03:53:05.709264 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.61s 2026-02-03 03:53:05.709272 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.57s 2026-02-03 03:53:05.709280 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.25s 2026-02-03 03:53:05.709288 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s 2026-02-03 03:53:08.187161 | orchestrator | 2026-02-03 03:53:08 | INFO  | Task 8e3daafc-021e-48da-9ab1-7b51e43e9775 (ceph-bootstrap-dashboard) was prepared for execution. 2026-02-03 03:53:08.187288 | orchestrator | 2026-02-03 03:53:08 | INFO  | It takes a moment until task 8e3daafc-021e-48da-9ab1-7b51e43e9775 (ceph-bootstrap-dashboard) has been started and output is visible here. 
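The "Copy wrapper scripts" step above installs host-side wrappers (ceph, ceph-authtool, rados, radosgw-admin, rbd) that forward their arguments into the containerized cephclient service. A minimal sketch of that pattern, assuming a docker-compose service named `cephclient` under `/opt/cephclient` — the function name and exact flags are illustrative, not the literal contents of the scripts shipped by osism.services.cephclient:

```shell
# Hypothetical sketch of a cephclient wrapper: each host-side tool name is
# forwarded to the CLI of the same name inside the compose service.
# For testability this prints the command line instead of running docker.
wrapper_cmd() {
    local tool="$1"; shift   # tool name, e.g. ceph / rados / rbd
    echo docker compose --project-directory /opt/cephclient \
        exec cephclient "$tool" "$@"
}

wrapper_cmd ceph -s
```

In a real wrapper the `echo` would be `exec`, so running `ceph -s` on the manager transparently executes inside the container with the keyring and ceph.conf mounted at `/opt/cephclient/configuration`.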
2026-02-03 03:54:30.531269 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-03 03:54:30.531390 | orchestrator | 2.16.14 2026-02-03 03:54:30.531409 | orchestrator | 2026-02-03 03:54:30.531423 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-02-03 03:54:30.531461 | orchestrator | 2026-02-03 03:54:30.531473 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-02-03 03:54:30.531485 | orchestrator | Tuesday 03 February 2026 03:53:12 +0000 (0:00:00.275) 0:00:00.275 ****** 2026-02-03 03:54:30.531496 | orchestrator | changed: [testbed-manager] 2026-02-03 03:54:30.531508 | orchestrator | 2026-02-03 03:54:30.531591 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-02-03 03:54:30.531603 | orchestrator | Tuesday 03 February 2026 03:53:14 +0000 (0:00:01.882) 0:00:02.157 ****** 2026-02-03 03:54:30.531614 | orchestrator | changed: [testbed-manager] 2026-02-03 03:54:30.531625 | orchestrator | 2026-02-03 03:54:30.531636 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-02-03 03:54:30.531647 | orchestrator | Tuesday 03 February 2026 03:53:15 +0000 (0:00:01.096) 0:00:03.254 ****** 2026-02-03 03:54:30.531658 | orchestrator | changed: [testbed-manager] 2026-02-03 03:54:30.531669 | orchestrator | 2026-02-03 03:54:30.531680 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-02-03 03:54:30.531692 | orchestrator | Tuesday 03 February 2026 03:53:16 +0000 (0:00:01.104) 0:00:04.359 ****** 2026-02-03 03:54:30.531703 | orchestrator | changed: [testbed-manager] 2026-02-03 03:54:30.531713 | orchestrator | 2026-02-03 03:54:30.531725 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-02-03 03:54:30.531736 | orchestrator | Tuesday 03 February 
2026 03:53:18 +0000 (0:00:01.251) 0:00:05.610 ****** 2026-02-03 03:54:30.531746 | orchestrator | changed: [testbed-manager] 2026-02-03 03:54:30.531764 | orchestrator | 2026-02-03 03:54:30.531827 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-02-03 03:54:30.531852 | orchestrator | Tuesday 03 February 2026 03:53:19 +0000 (0:00:01.130) 0:00:06.741 ****** 2026-02-03 03:54:30.531872 | orchestrator | changed: [testbed-manager] 2026-02-03 03:54:30.531892 | orchestrator | 2026-02-03 03:54:30.531913 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-02-03 03:54:30.531934 | orchestrator | Tuesday 03 February 2026 03:53:20 +0000 (0:00:01.129) 0:00:07.870 ****** 2026-02-03 03:54:30.531953 | orchestrator | changed: [testbed-manager] 2026-02-03 03:54:30.531972 | orchestrator | 2026-02-03 03:54:30.531992 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-02-03 03:54:30.532012 | orchestrator | Tuesday 03 February 2026 03:53:22 +0000 (0:00:02.084) 0:00:09.954 ****** 2026-02-03 03:54:30.532031 | orchestrator | changed: [testbed-manager] 2026-02-03 03:54:30.532051 | orchestrator | 2026-02-03 03:54:30.532070 | orchestrator | TASK [Create admin user] ******************************************************* 2026-02-03 03:54:30.532083 | orchestrator | Tuesday 03 February 2026 03:53:23 +0000 (0:00:01.237) 0:00:11.191 ****** 2026-02-03 03:54:30.532096 | orchestrator | changed: [testbed-manager] 2026-02-03 03:54:30.532109 | orchestrator | 2026-02-03 03:54:30.532121 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-02-03 03:54:30.532132 | orchestrator | Tuesday 03 February 2026 03:54:04 +0000 (0:00:40.931) 0:00:52.123 ****** 2026-02-03 03:54:30.532143 | orchestrator | skipping: [testbed-manager] 2026-02-03 03:54:30.532154 | orchestrator | 2026-02-03 03:54:30.532165 | orchestrator | 
PLAY [Restart ceph manager services] ******************************************* 2026-02-03 03:54:30.532176 | orchestrator | 2026-02-03 03:54:30.532187 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-03 03:54:30.532197 | orchestrator | Tuesday 03 February 2026 03:54:04 +0000 (0:00:00.179) 0:00:52.302 ****** 2026-02-03 03:54:30.532208 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:54:30.532219 | orchestrator | 2026-02-03 03:54:30.532230 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-03 03:54:30.532241 | orchestrator | 2026-02-03 03:54:30.532251 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-03 03:54:30.532262 | orchestrator | Tuesday 03 February 2026 03:54:06 +0000 (0:00:01.780) 0:00:54.083 ****** 2026-02-03 03:54:30.532301 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:54:30.532326 | orchestrator | 2026-02-03 03:54:30.532343 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-03 03:54:30.532361 | orchestrator | 2026-02-03 03:54:30.532378 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-03 03:54:30.532396 | orchestrator | Tuesday 03 February 2026 03:54:17 +0000 (0:00:11.247) 0:01:05.331 ****** 2026-02-03 03:54:30.532414 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:54:30.532431 | orchestrator | 2026-02-03 03:54:30.532448 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 03:54:30.532466 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-03 03:54:30.532484 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-03 03:54:30.532502 | orchestrator | testbed-node-1 : ok=1  changed=1  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-03 03:54:30.532546 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-03 03:54:30.532567 | orchestrator | 2026-02-03 03:54:30.532585 | orchestrator | 2026-02-03 03:54:30.532604 | orchestrator | 2026-02-03 03:54:30.532617 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 03:54:30.532627 | orchestrator | Tuesday 03 February 2026 03:54:30 +0000 (0:00:12.332) 0:01:17.663 ****** 2026-02-03 03:54:30.532638 | orchestrator | =============================================================================== 2026-02-03 03:54:30.532649 | orchestrator | Create admin user ------------------------------------------------------ 40.93s 2026-02-03 03:54:30.532683 | orchestrator | Restart ceph manager service ------------------------------------------- 25.36s 2026-02-03 03:54:30.532694 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.08s 2026-02-03 03:54:30.532705 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.88s 2026-02-03 03:54:30.532716 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.25s 2026-02-03 03:54:30.532727 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.24s 2026-02-03 03:54:30.532738 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.13s 2026-02-03 03:54:30.532748 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.13s 2026-02-03 03:54:30.532759 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.10s 2026-02-03 03:54:30.532770 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.10s 2026-02-03 03:54:30.532781 | orchestrator | Remove temporary file for 
ceph_dashboard_password ----------------------- 0.18s 2026-02-03 03:54:30.906397 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh 2026-02-03 03:54:33.064208 | orchestrator | 2026-02-03 03:54:33 | INFO  | Task d45176cf-2999-476f-8248-c389853401aa (keystone) was prepared for execution. 2026-02-03 03:54:33.066311 | orchestrator | 2026-02-03 03:54:33 | INFO  | It takes a moment until task d45176cf-2999-476f-8248-c389853401aa (keystone) has been started and output is visible here. 2026-02-03 03:54:40.499891 | orchestrator | 2026-02-03 03:54:40.500013 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-03 03:54:40.500025 | orchestrator | 2026-02-03 03:54:40.500048 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-03 03:54:40.500056 | orchestrator | Tuesday 03 February 2026 03:54:37 +0000 (0:00:00.280) 0:00:00.280 ****** 2026-02-03 03:54:40.500062 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:54:40.500070 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:54:40.500076 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:54:40.500082 | orchestrator | 2026-02-03 03:54:40.500110 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-03 03:54:40.500116 | orchestrator | Tuesday 03 February 2026 03:54:37 +0000 (0:00:00.340) 0:00:00.621 ****** 2026-02-03 03:54:40.500122 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-02-03 03:54:40.500128 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-02-03 03:54:40.500134 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-02-03 03:54:40.500141 | orchestrator | 2026-02-03 03:54:40.500146 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-02-03 03:54:40.500151 | orchestrator | 2026-02-03 03:54:40.500157 | orchestrator | TASK 
[keystone : include_tasks] ************************************************ 2026-02-03 03:54:40.500163 | orchestrator | Tuesday 03 February 2026 03:54:38 +0000 (0:00:00.472) 0:00:01.094 ****** 2026-02-03 03:54:40.500170 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:54:40.500177 | orchestrator | 2026-02-03 03:54:40.500183 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-02-03 03:54:40.500189 | orchestrator | Tuesday 03 February 2026 03:54:38 +0000 (0:00:00.567) 0:00:01.661 ****** 2026-02-03 03:54:40.500201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-03 03:54:40.500211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-03 03:54:40.500244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-03 03:54:40.500261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-03 03:54:40.500269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-03 03:54:40.500275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-03 03:54:40.500281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-03 03:54:40.500288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-03 03:54:40.500294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-03 03:54:40.500306 | orchestrator | 2026-02-03 03:54:40.500314 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 
2026-02-03 03:54:40.500326 | orchestrator | Tuesday 03 February 2026 03:54:40 +0000 (0:00:01.729) 0:00:03.390 ****** 2026-02-03 03:54:46.278889 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:54:46.278989 | orchestrator | 2026-02-03 03:54:46.279020 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-02-03 03:54:46.279031 | orchestrator | Tuesday 03 February 2026 03:54:40 +0000 (0:00:00.342) 0:00:03.733 ****** 2026-02-03 03:54:46.279040 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:54:46.279049 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:54:46.279058 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:54:46.279067 | orchestrator | 2026-02-03 03:54:46.279076 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-02-03 03:54:46.279085 | orchestrator | Tuesday 03 February 2026 03:54:41 +0000 (0:00:00.363) 0:00:04.097 ****** 2026-02-03 03:54:46.279094 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-03 03:54:46.279102 | orchestrator | 2026-02-03 03:54:46.279111 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-03 03:54:46.279120 | orchestrator | Tuesday 03 February 2026 03:54:41 +0000 (0:00:00.789) 0:00:04.887 ****** 2026-02-03 03:54:46.279129 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:54:46.279138 | orchestrator | 2026-02-03 03:54:46.279147 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-02-03 03:54:46.279156 | orchestrator | Tuesday 03 February 2026 03:54:42 +0000 (0:00:00.544) 0:00:05.431 ****** 2026-02-03 03:54:46.279169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-03 03:54:46.279182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-03 03:54:46.279193 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-03 03:54:46.279245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-03 03:54:46.279259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-03 03:54:46.279268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-03 03:54:46.279277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-03 03:54:46.279287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-03 03:54:46.279302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-03 03:54:46.279312 | orchestrator | 2026-02-03 03:54:46.279321 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-02-03 03:54:46.279330 | orchestrator | Tuesday 03 February 2026 03:54:45 +0000 (0:00:03.121) 0:00:08.553 ****** 2026-02-03 03:54:46.279347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-03 03:54:47.045593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-03 03:54:47.045790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-03 03:54:47.045824 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:54:47.045841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-03 03:54:47.045877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-03 03:54:47.045893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-03 03:54:47.045903 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:54:47.045935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-03 03:54:47.045948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-02-03 03:54:47.045961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-03 03:54:47.045981 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:54:47.045995 | orchestrator | 2026-02-03 03:54:47.046008 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-02-03 03:54:47.046104 | orchestrator | Tuesday 03 February 2026 03:54:46 +0000 (0:00:00.629) 0:00:09.183 ****** 2026-02-03 03:54:47.046119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-03 03:54:47.046139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-03 03:54:47.046165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-03 03:54:49.990998 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:54:49.991132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-03 03:54:49.991163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-03 03:54:49.991220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-03 03:54:49.991240 | 
orchestrator | skipping: [testbed-node-1] 2026-02-03 03:54:49.991277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-03 03:54:49.991299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-03 03:54:49.991342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-03 03:54:49.991363 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:54:49.991381 | orchestrator | 2026-02-03 03:54:49.991401 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-02-03 03:54:49.991420 | orchestrator | Tuesday 03 February 2026 03:54:47 +0000 (0:00:00.763) 0:00:09.946 ****** 2026-02-03 03:54:49.991439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-03 03:54:49.991475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-03 03:54:49.991506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-03 03:54:49.991576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-03 03:54:54.539855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-03 03:54:54.539971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-02-03 03:54:54.539985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-03 03:54:54.539993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-03 03:54:54.540013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-03 
03:54:54.540020 | orchestrator | 2026-02-03 03:54:54.540028 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-02-03 03:54:54.540035 | orchestrator | Tuesday 03 February 2026 03:54:49 +0000 (0:00:02.944) 0:00:12.891 ****** 2026-02-03 03:54:54.540060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-03 03:54:54.540075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-02-03 03:54:54.540083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-03 03:54:54.540095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-03 03:54:54.540101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-03 03:54:54.540114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-03 03:54:58.169265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-03 03:54:58.169355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-03 03:54:58.169371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-03 03:54:58.169381 | orchestrator | 2026-02-03 03:54:58.169392 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-02-03 03:54:58.169404 | orchestrator | Tuesday 03 February 2026 03:54:54 +0000 (0:00:04.543) 0:00:17.434 ****** 2026-02-03 03:54:58.169410 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:54:58.169418 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:54:58.169425 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:54:58.169432 | orchestrator | 2026-02-03 
03:54:58.169440 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-02-03 03:54:58.169447 | orchestrator | Tuesday 03 February 2026 03:54:56 +0000 (0:00:01.520) 0:00:18.954 ****** 2026-02-03 03:54:58.169454 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:54:58.169460 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:54:58.169467 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:54:58.169474 | orchestrator | 2026-02-03 03:54:58.169481 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-02-03 03:54:58.169488 | orchestrator | Tuesday 03 February 2026 03:54:56 +0000 (0:00:00.577) 0:00:19.531 ****** 2026-02-03 03:54:58.169495 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:54:58.169596 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:54:58.169604 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:54:58.169612 | orchestrator | 2026-02-03 03:54:58.169619 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-02-03 03:54:58.169626 | orchestrator | Tuesday 03 February 2026 03:54:57 +0000 (0:00:00.581) 0:00:20.113 ****** 2026-02-03 03:54:58.169633 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:54:58.169641 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:54:58.169648 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:54:58.169656 | orchestrator | 2026-02-03 03:54:58.169663 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-02-03 03:54:58.169670 | orchestrator | Tuesday 03 February 2026 03:54:57 +0000 (0:00:00.323) 0:00:20.436 ****** 2026-02-03 03:54:58.169720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-03 03:54:58.169730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-03 03:54:58.169738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-03 03:54:58.169745 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:54:58.169753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-03 03:54:58.169766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-03 03:54:58.169782 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-03 03:54:58.169790 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:54:58.169807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-03 03:55:18.048171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-03 03:55:18.048256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-03 03:55:18.048264 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:55:18.048270 | orchestrator | 2026-02-03 03:55:18.048276 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-03 03:55:18.048282 | orchestrator | Tuesday 03 February 2026 03:54:58 +0000 (0:00:00.629) 0:00:21.066 ****** 2026-02-03 03:55:18.048286 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:55:18.048290 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:55:18.048295 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:55:18.048299 | orchestrator | 2026-02-03 03:55:18.048303 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-02-03 03:55:18.048308 | orchestrator | Tuesday 03 February 2026 03:54:58 +0000 (0:00:00.317) 0:00:21.383 ****** 2026-02-03 03:55:18.048312 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-03 03:55:18.048335 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-03 03:55:18.048351 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-03 03:55:18.048356 | orchestrator | 2026-02-03 03:55:18.048360 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-02-03 03:55:18.048364 | orchestrator | Tuesday 03 February 2026 03:55:00 +0000 (0:00:01.892) 0:00:23.275 ****** 2026-02-03 03:55:18.048368 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-03 03:55:18.048373 | orchestrator | 2026-02-03 03:55:18.048377 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-02-03 03:55:18.048381 | orchestrator | Tuesday 03 February 2026 03:55:01 +0000 (0:00:00.979) 0:00:24.255 ****** 2026-02-03 03:55:18.048385 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:55:18.048390 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:55:18.048394 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:55:18.048398 | orchestrator | 2026-02-03 03:55:18.048402 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-02-03 03:55:18.048407 | orchestrator | Tuesday 03 February 2026 03:55:02 +0000 (0:00:00.726) 0:00:24.981 ****** 2026-02-03 03:55:18.048411 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-03 03:55:18.048415 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-03 03:55:18.048419 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-03 03:55:18.048423 | orchestrator | 2026-02-03 03:55:18.048428 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-02-03 03:55:18.048433 | orchestrator | Tuesday 03 February 2026 03:55:03 +0000 (0:00:01.182) 
0:00:26.164 ****** 2026-02-03 03:55:18.048437 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:55:18.048442 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:55:18.048447 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:55:18.048451 | orchestrator | 2026-02-03 03:55:18.048455 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-02-03 03:55:18.048459 | orchestrator | Tuesday 03 February 2026 03:55:03 +0000 (0:00:00.581) 0:00:26.746 ****** 2026-02-03 03:55:18.048464 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-03 03:55:18.048468 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-03 03:55:18.048472 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-03 03:55:18.048477 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-03 03:55:18.048481 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-03 03:55:18.048485 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-03 03:55:18.048489 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-03 03:55:18.048494 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-03 03:55:18.048543 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-03 03:55:18.048548 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-03 03:55:18.048553 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-03 
03:55:18.048557 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-03 03:55:18.048561 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-03 03:55:18.048566 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-03 03:55:18.048570 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-03 03:55:18.048578 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-03 03:55:18.048583 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-03 03:55:18.048587 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-03 03:55:18.048591 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-03 03:55:18.048595 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-03 03:55:18.048599 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-03 03:55:18.048604 | orchestrator | 2026-02-03 03:55:18.048608 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-02-03 03:55:18.048612 | orchestrator | Tuesday 03 February 2026 03:55:12 +0000 (0:00:09.064) 0:00:35.811 ****** 2026-02-03 03:55:18.048616 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-03 03:55:18.048620 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-03 03:55:18.048625 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-03 03:55:18.048629 
| orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-03 03:55:18.048633 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-03 03:55:18.048637 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-03 03:55:18.048642 | orchestrator | 2026-02-03 03:55:18.048649 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-02-03 03:55:18.048654 | orchestrator | Tuesday 03 February 2026 03:55:15 +0000 (0:00:02.785) 0:00:38.597 ****** 2026-02-03 03:55:18.048660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-03 03:55:18.048669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-03 03:57:02.050450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-03 03:57:02.050657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-03 03:57:02.050698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-03 03:57:02.050710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-03 03:57:02.050722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-03 03:57:02.050754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-03 03:57:02.050790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-03 03:57:02.050802 | orchestrator | 2026-02-03 03:57:02.050814 | orchestrator | TASK [keystone : include_tasks] ************************************************ 
2026-02-03 03:57:02.050826 | orchestrator | Tuesday 03 February 2026 03:55:18 +0000 (0:00:02.349) 0:00:40.946 ****** 2026-02-03 03:57:02.050837 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:57:02.050848 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:57:02.050857 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:57:02.050867 | orchestrator | 2026-02-03 03:57:02.050877 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-02-03 03:57:02.050887 | orchestrator | Tuesday 03 February 2026 03:55:18 +0000 (0:00:00.555) 0:00:41.501 ****** 2026-02-03 03:57:02.050897 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:57:02.050907 | orchestrator | 2026-02-03 03:57:02.050917 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-02-03 03:57:02.050926 | orchestrator | Tuesday 03 February 2026 03:55:20 +0000 (0:00:02.283) 0:00:43.785 ****** 2026-02-03 03:57:02.050936 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:57:02.050946 | orchestrator | 2026-02-03 03:57:02.050956 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-02-03 03:57:02.050968 | orchestrator | Tuesday 03 February 2026 03:55:23 +0000 (0:00:02.205) 0:00:45.991 ****** 2026-02-03 03:57:02.050979 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:57:02.050991 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:57:02.051003 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:57:02.051041 | orchestrator | 2026-02-03 03:57:02.051055 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-02-03 03:57:02.051066 | orchestrator | Tuesday 03 February 2026 03:55:23 +0000 (0:00:00.867) 0:00:46.858 ****** 2026-02-03 03:57:02.051078 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:57:02.051090 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:57:02.051109 | orchestrator | ok: 
[testbed-node-2] 2026-02-03 03:57:02.051121 | orchestrator | 2026-02-03 03:57:02.051133 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-02-03 03:57:02.051147 | orchestrator | Tuesday 03 February 2026 03:55:24 +0000 (0:00:00.331) 0:00:47.190 ****** 2026-02-03 03:57:02.051159 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:57:02.051171 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:57:02.051183 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:57:02.051194 | orchestrator | 2026-02-03 03:57:02.051207 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-02-03 03:57:02.051219 | orchestrator | Tuesday 03 February 2026 03:55:24 +0000 (0:00:00.393) 0:00:47.584 ****** 2026-02-03 03:57:02.051231 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:57:02.051243 | orchestrator | 2026-02-03 03:57:02.051253 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-02-03 03:57:02.051263 | orchestrator | Tuesday 03 February 2026 03:55:40 +0000 (0:00:15.335) 0:01:02.919 ****** 2026-02-03 03:57:02.051273 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:57:02.051282 | orchestrator | 2026-02-03 03:57:02.051292 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-03 03:57:02.051311 | orchestrator | Tuesday 03 February 2026 03:55:50 +0000 (0:00:10.608) 0:01:13.528 ****** 2026-02-03 03:57:02.051321 | orchestrator | 2026-02-03 03:57:02.051331 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-03 03:57:02.051341 | orchestrator | Tuesday 03 February 2026 03:55:50 +0000 (0:00:00.086) 0:01:13.614 ****** 2026-02-03 03:57:02.051351 | orchestrator | 2026-02-03 03:57:02.051361 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-03 
03:57:02.051370 | orchestrator | Tuesday 03 February 2026 03:55:50 +0000 (0:00:00.085) 0:01:13.700 ****** 2026-02-03 03:57:02.051380 | orchestrator | 2026-02-03 03:57:02.051390 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-02-03 03:57:02.051400 | orchestrator | Tuesday 03 February 2026 03:55:50 +0000 (0:00:00.076) 0:01:13.776 ****** 2026-02-03 03:57:02.051409 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:57:02.051419 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:57:02.051429 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:57:02.051439 | orchestrator | 2026-02-03 03:57:02.051449 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-02-03 03:57:02.051459 | orchestrator | Tuesday 03 February 2026 03:56:43 +0000 (0:00:52.698) 0:02:06.475 ****** 2026-02-03 03:57:02.051469 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:57:02.051478 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:57:02.051488 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:57:02.051498 | orchestrator | 2026-02-03 03:57:02.051539 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-02-03 03:57:02.051555 | orchestrator | Tuesday 03 February 2026 03:56:53 +0000 (0:00:10.324) 0:02:16.800 ****** 2026-02-03 03:57:02.051569 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:57:02.051585 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:57:02.051600 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:57:02.051617 | orchestrator | 2026-02-03 03:57:02.051635 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-03 03:57:02.051651 | orchestrator | Tuesday 03 February 2026 03:57:01 +0000 (0:00:07.446) 0:02:24.246 ****** 2026-02-03 03:57:02.051679 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:57:53.878091 | orchestrator | 2026-02-03 03:57:53.878176 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-02-03 03:57:53.878184 | orchestrator | Tuesday 03 February 2026 03:57:02 +0000 (0:00:00.703) 0:02:24.949 ****** 2026-02-03 03:57:53.878190 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:57:53.878195 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:57:53.878200 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:57:53.878204 | orchestrator | 2026-02-03 03:57:53.878208 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-02-03 03:57:53.878213 | orchestrator | Tuesday 03 February 2026 03:57:02 +0000 (0:00:00.804) 0:02:25.754 ****** 2026-02-03 03:57:53.878217 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:57:53.878221 | orchestrator | 2026-02-03 03:57:53.878225 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-02-03 03:57:53.878229 | orchestrator | Tuesday 03 February 2026 03:57:05 +0000 (0:00:02.333) 0:02:28.088 ****** 2026-02-03 03:57:53.878233 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-02-03 03:57:53.878237 | orchestrator | 2026-02-03 03:57:53.878241 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-02-03 03:57:53.878245 | orchestrator | Tuesday 03 February 2026 03:57:17 +0000 (0:00:12.000) 0:02:40.088 ****** 2026-02-03 03:57:53.878249 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-02-03 03:57:53.878253 | orchestrator | 2026-02-03 03:57:53.878257 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-02-03 03:57:53.878261 | orchestrator | Tuesday 03 February 2026 03:57:42 +0000 (0:00:25.243) 0:03:05.332 ****** 2026-02-03 03:57:53.878283 | orchestrator | ok: [testbed-node-0] => 
(item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-02-03 03:57:53.878288 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-02-03 03:57:53.878292 | orchestrator | 2026-02-03 03:57:53.878296 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-02-03 03:57:53.878300 | orchestrator | Tuesday 03 February 2026 03:57:48 +0000 (0:00:06.350) 0:03:11.683 ****** 2026-02-03 03:57:53.878303 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:57:53.878307 | orchestrator | 2026-02-03 03:57:53.878311 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-02-03 03:57:53.878315 | orchestrator | Tuesday 03 February 2026 03:57:48 +0000 (0:00:00.146) 0:03:11.830 ****** 2026-02-03 03:57:53.878318 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:57:53.878322 | orchestrator | 2026-02-03 03:57:53.878326 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-02-03 03:57:53.878341 | orchestrator | Tuesday 03 February 2026 03:57:49 +0000 (0:00:00.141) 0:03:11.971 ****** 2026-02-03 03:57:53.878344 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:57:53.878348 | orchestrator | 2026-02-03 03:57:53.878352 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-02-03 03:57:53.878356 | orchestrator | Tuesday 03 February 2026 03:57:49 +0000 (0:00:00.132) 0:03:12.104 ****** 2026-02-03 03:57:53.878360 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:57:53.878364 | orchestrator | 2026-02-03 03:57:53.878367 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-02-03 03:57:53.878371 | orchestrator | Tuesday 03 February 2026 03:57:49 +0000 (0:00:00.370) 0:03:12.474 ****** 2026-02-03 03:57:53.878375 | orchestrator | ok: [testbed-node-0] 2026-02-03 
03:57:53.878379 | orchestrator | 2026-02-03 03:57:53.878383 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-03 03:57:53.878387 | orchestrator | Tuesday 03 February 2026 03:57:52 +0000 (0:00:03.181) 0:03:15.656 ****** 2026-02-03 03:57:53.878390 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:57:53.878394 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:57:53.878398 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:57:53.878402 | orchestrator | 2026-02-03 03:57:53.878405 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 03:57:53.878410 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-03 03:57:53.878416 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-03 03:57:53.878419 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-03 03:57:53.878423 | orchestrator | 2026-02-03 03:57:53.878428 | orchestrator | 2026-02-03 03:57:53.878432 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 03:57:53.878435 | orchestrator | Tuesday 03 February 2026 03:57:53 +0000 (0:00:00.719) 0:03:16.375 ****** 2026-02-03 03:57:53.878439 | orchestrator | =============================================================================== 2026-02-03 03:57:53.878443 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 52.70s 2026-02-03 03:57:53.878447 | orchestrator | service-ks-register : keystone | Creating services --------------------- 25.24s 2026-02-03 03:57:53.878451 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.34s 2026-02-03 03:57:53.878455 | orchestrator | keystone : Creating admin project, user, role, service, and 
endpoint --- 12.00s 2026-02-03 03:57:53.878458 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.61s 2026-02-03 03:57:53.878462 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.32s 2026-02-03 03:57:53.878466 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.06s 2026-02-03 03:57:53.878475 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.45s 2026-02-03 03:57:53.878479 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.35s 2026-02-03 03:57:53.878492 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.54s 2026-02-03 03:57:53.878496 | orchestrator | keystone : Creating default user role ----------------------------------- 3.18s 2026-02-03 03:57:53.878500 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.12s 2026-02-03 03:57:53.878544 | orchestrator | keystone : Copying over config.json files for services ------------------ 2.94s 2026-02-03 03:57:53.878549 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.79s 2026-02-03 03:57:53.878553 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.35s 2026-02-03 03:57:53.878557 | orchestrator | keystone : Run key distribution ----------------------------------------- 2.33s 2026-02-03 03:57:53.878561 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.28s 2026-02-03 03:57:53.878564 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.21s 2026-02-03 03:57:53.878568 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.89s 2026-02-03 03:57:53.878572 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 
1.73s 2026-02-03 03:57:56.397915 | orchestrator | 2026-02-03 03:57:56 | INFO  | Task 6342d0d9-1de6-46a7-a00d-381d491732dc (placement) was prepared for execution. 2026-02-03 03:57:56.397993 | orchestrator | 2026-02-03 03:57:56 | INFO  | It takes a moment until task 6342d0d9-1de6-46a7-a00d-381d491732dc (placement) has been started and output is visible here. 2026-02-03 03:58:31.983969 | orchestrator | 2026-02-03 03:58:31.984076 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-03 03:58:31.984090 | orchestrator | 2026-02-03 03:58:31.984100 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-03 03:58:31.984110 | orchestrator | Tuesday 03 February 2026 03:58:00 +0000 (0:00:00.295) 0:00:00.295 ****** 2026-02-03 03:58:31.984119 | orchestrator | ok: [testbed-node-0] 2026-02-03 03:58:31.984130 | orchestrator | ok: [testbed-node-1] 2026-02-03 03:58:31.984139 | orchestrator | ok: [testbed-node-2] 2026-02-03 03:58:31.984148 | orchestrator | 2026-02-03 03:58:31.984157 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-03 03:58:31.984166 | orchestrator | Tuesday 03 February 2026 03:58:01 +0000 (0:00:00.314) 0:00:00.609 ****** 2026-02-03 03:58:31.984176 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-02-03 03:58:31.984201 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-02-03 03:58:31.984210 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-02-03 03:58:31.984219 | orchestrator | 2026-02-03 03:58:31.984228 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-02-03 03:58:31.984237 | orchestrator | 2026-02-03 03:58:31.984245 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-03 03:58:31.984254 | orchestrator | Tuesday 03 February 2026 
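An aside on the `service-ks-register` tasks logged above: for each API service the role creates one Keystone service plus one endpoint per interface, which is why the log shows paired lines like `keystone -> https://api-int.testbed.osism.xyz:5000 -> internal` and `keystone -> https://api.testbed.osism.xyz:5000 -> public`. A minimal sketch of that expansion (the function name is ours; the FQDNs and ports are taken from the log, not from the role's actual variables):

```python
# Sketch only: how one service entry fans out into per-interface endpoints,
# mirroring the "service -> url -> interface" lines in the log above.
# Defaults below copy the testbed FQDNs seen in the log; this is not the
# actual service-ks-register implementation.

def expand_endpoints(service, port,
                     internal_fqdn="api-int.testbed.osism.xyz",
                     public_fqdn="api.testbed.osism.xyz"):
    """Return (service, url, interface) triples for one API service."""
    return [
        (service, f"https://{internal_fqdn}:{port}", "internal"),
        (service, f"https://{public_fqdn}:{port}", "public"),
    ]

for svc, url, interface in expand_endpoints("placement", 8780):
    print(f"{svc} -> {url} -> {interface}")
```

Run against `("keystone", 5000)` this reproduces the keystone endpoint pair logged earlier in the play.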
03:58:01 +0000 (0:00:00.481) 0:00:01.091 ****** 2026-02-03 03:58:31.984264 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:58:31.984274 | orchestrator | 2026-02-03 03:58:31.984283 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-02-03 03:58:31.984292 | orchestrator | Tuesday 03 February 2026 03:58:02 +0000 (0:00:00.581) 0:00:01.673 ****** 2026-02-03 03:58:31.984300 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-02-03 03:58:31.984309 | orchestrator | 2026-02-03 03:58:31.984318 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-02-03 03:58:31.984327 | orchestrator | Tuesday 03 February 2026 03:58:06 +0000 (0:00:03.945) 0:00:05.618 ****** 2026-02-03 03:58:31.984357 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-02-03 03:58:31.984366 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-02-03 03:58:31.984375 | orchestrator | 2026-02-03 03:58:31.984384 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-02-03 03:58:31.984393 | orchestrator | Tuesday 03 February 2026 03:58:12 +0000 (0:00:06.238) 0:00:11.856 ****** 2026-02-03 03:58:31.984402 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-02-03 03:58:31.984411 | orchestrator | 2026-02-03 03:58:31.984419 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-02-03 03:58:31.984428 | orchestrator | Tuesday 03 February 2026 03:58:16 +0000 (0:00:03.714) 0:00:15.571 ****** 2026-02-03 03:58:31.984437 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-03 03:58:31.984446 | orchestrator | changed: [testbed-node-0] => 
(item=placement -> service) 2026-02-03 03:58:31.984454 | orchestrator | 2026-02-03 03:58:31.984463 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-02-03 03:58:31.984472 | orchestrator | Tuesday 03 February 2026 03:58:20 +0000 (0:00:04.152) 0:00:19.723 ****** 2026-02-03 03:58:31.984481 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-03 03:58:31.984489 | orchestrator | 2026-02-03 03:58:31.984498 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-02-03 03:58:31.984556 | orchestrator | Tuesday 03 February 2026 03:58:23 +0000 (0:00:03.109) 0:00:22.833 ****** 2026-02-03 03:58:31.984568 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-02-03 03:58:31.984578 | orchestrator | 2026-02-03 03:58:31.984588 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-03 03:58:31.984600 | orchestrator | Tuesday 03 February 2026 03:58:27 +0000 (0:00:04.115) 0:00:26.948 ****** 2026-02-03 03:58:31.984609 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:58:31.984619 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:58:31.984629 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:58:31.984639 | orchestrator | 2026-02-03 03:58:31.984650 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-02-03 03:58:31.984661 | orchestrator | Tuesday 03 February 2026 03:58:27 +0000 (0:00:00.295) 0:00:27.244 ****** 2026-02-03 03:58:31.984674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-03 03:58:31.984709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-03 03:58:31.984728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-03 03:58:31.984739 | orchestrator | 2026-02-03 03:58:31.984750 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-02-03 03:58:31.984780 | orchestrator | Tuesday 03 February 2026 03:58:28 +0000 (0:00:00.870) 0:00:28.115 ****** 2026-02-03 03:58:31.984790 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:58:31.984799 | orchestrator | 2026-02-03 03:58:31.984808 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-02-03 03:58:31.984817 | orchestrator | Tuesday 03 February 2026 03:58:29 +0000 (0:00:00.363) 0:00:28.478 ****** 2026-02-03 03:58:31.984825 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:58:31.984834 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:58:31.984843 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:58:31.984851 | orchestrator | 2026-02-03 03:58:31.984860 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-03 03:58:31.984869 | orchestrator | Tuesday 03 February 2026 03:58:29 +0000 (0:00:00.326) 0:00:28.805 ****** 2026-02-03 03:58:31.984878 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 03:58:31.984887 | orchestrator | 2026-02-03 03:58:31.984896 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-02-03 03:58:31.984905 | orchestrator | Tuesday 03 February 2026 03:58:30 +0000 (0:00:00.577) 
0:00:29.382 ****** 2026-02-03 03:58:31.984914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-03 03:58:31.984932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-03 03:58:35.046497 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-03 03:58:35.046668 | orchestrator | 2026-02-03 03:58:35.046687 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-02-03 03:58:35.046699 | orchestrator | Tuesday 03 February 2026 03:58:31 +0000 (0:00:01.910) 0:00:31.293 ****** 2026-02-03 03:58:35.046711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-03 03:58:35.046722 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:58:35.046733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-03 03:58:35.046744 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:58:35.046754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-03 03:58:35.046784 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:58:35.046795 | orchestrator | 2026-02-03 03:58:35.046806 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-02-03 03:58:35.046832 | orchestrator | Tuesday 03 February 2026 03:58:32 +0000 (0:00:00.579) 0:00:31.872 ****** 2026-02-03 03:58:35.046851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-03 03:58:35.046862 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:58:35.046872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-03 03:58:35.046883 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:58:35.046893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-03 03:58:35.046903 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:58:35.046913 | orchestrator | 2026-02-03 03:58:35.046922 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-02-03 03:58:35.046932 | orchestrator | Tuesday 03 February 2026 03:58:33 +0000 (0:00:00.764) 0:00:32.636 ****** 2026-02-03 03:58:35.046942 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-03 03:58:35.046971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-03 03:58:42.420260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-03 03:58:42.420352 | orchestrator | 2026-02-03 03:58:42.420360 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-02-03 03:58:42.420366 | orchestrator | Tuesday 03 February 2026 03:58:35 +0000 (0:00:01.724) 0:00:34.361 ****** 2026-02-03 03:58:42.420371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}}) 2026-02-03 03:58:42.420376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-03 03:58:42.420404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-03 03:58:42.420409 | orchestrator | 2026-02-03 03:58:42.420413 | orchestrator | 
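The loop items repeated in the tasks above each carry a kolla-style `healthcheck` dict (`interval`, `retries`, `start_period`, `test`, `timeout`, all seconds as strings). As an illustration of what that dict encodes, here is a sketch translating it into `docker run` health flags; the helper name and the exact flag mapping are our own, not kolla-ansible's code:

```python
# Sketch only: mapping the kolla healthcheck dict from the loop items above
# onto docker CLI health flags. Field names match the log; the mapping is
# illustrative, not the kolla-ansible implementation.

def healthcheck_flags(hc):
    """Map a kolla healthcheck dict (seconds as strings) to docker run flags."""
    return [
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
        "--health-cmd=" + hc["test"][1],  # drop the CMD-SHELL marker
    ]

hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8780"],
      "timeout": "30"}
print(" ".join(healthcheck_flags(hc)))
```

The `healthcheck_curl` probe against the node's internal API address (192.168.16.10/11/12:8780 in the log) is what later marks each `placement_api` container healthy.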
TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-02-03 03:58:42.420417 | orchestrator | Tuesday 03 February 2026 03:58:37 +0000 (0:00:02.436) 0:00:36.797 ****** 2026-02-03 03:58:42.420433 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-03 03:58:42.420439 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-03 03:58:42.420443 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-03 03:58:42.420447 | orchestrator | 2026-02-03 03:58:42.420451 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-02-03 03:58:42.420454 | orchestrator | Tuesday 03 February 2026 03:58:39 +0000 (0:00:01.587) 0:00:38.384 ****** 2026-02-03 03:58:42.420458 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:58:42.420463 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:58:42.420467 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:58:42.420471 | orchestrator | 2026-02-03 03:58:42.420475 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-02-03 03:58:42.420481 | orchestrator | Tuesday 03 February 2026 03:58:40 +0000 (0:00:01.400) 0:00:39.785 ****** 2026-02-03 03:58:42.420488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-03 03:58:42.420499 | orchestrator | skipping: [testbed-node-0] 2026-02-03 03:58:42.420506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-03 03:58:42.420513 | orchestrator | skipping: [testbed-node-1] 2026-02-03 03:58:42.420568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-03 03:58:42.420575 | orchestrator | skipping: [testbed-node-2] 2026-02-03 03:58:42.420582 | orchestrator | 2026-02-03 03:58:42.420592 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-02-03 03:58:42.420599 | orchestrator | Tuesday 03 February 2026 03:58:41 +0000 (0:00:00.824) 0:00:40.609 ****** 2026-02-03 03:58:42.420613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-03 03:59:12.080528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-03 03:59:12.080701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-03 03:59:12.080715 | orchestrator | 2026-02-03 03:59:12.080727 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-02-03 03:59:12.080738 | orchestrator | Tuesday 03 February 2026 03:58:42 +0000 (0:00:01.131) 0:00:41.741 ****** 2026-02-03 03:59:12.080747 | orchestrator | changed: [testbed-node-0] 2026-02-03 
03:59:12.080758 | orchestrator | 2026-02-03 03:59:12.080768 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-02-03 03:59:12.080776 | orchestrator | Tuesday 03 February 2026 03:58:44 +0000 (0:00:02.129) 0:00:43.870 ****** 2026-02-03 03:59:12.080785 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:59:12.080794 | orchestrator | 2026-02-03 03:59:12.080803 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-02-03 03:59:12.080812 | orchestrator | Tuesday 03 February 2026 03:58:46 +0000 (0:00:02.281) 0:00:46.152 ****** 2026-02-03 03:59:12.080821 | orchestrator | changed: [testbed-node-0] 2026-02-03 03:59:12.080829 | orchestrator | 2026-02-03 03:59:12.080838 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-03 03:59:12.080848 | orchestrator | Tuesday 03 February 2026 03:59:00 +0000 (0:00:14.151) 0:01:00.304 ****** 2026-02-03 03:59:12.080857 | orchestrator | 2026-02-03 03:59:12.080865 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-03 03:59:12.080874 | orchestrator | Tuesday 03 February 2026 03:59:01 +0000 (0:00:00.074) 0:01:00.378 ****** 2026-02-03 03:59:12.080883 | orchestrator | 2026-02-03 03:59:12.080891 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-03 03:59:12.080900 | orchestrator | Tuesday 03 February 2026 03:59:01 +0000 (0:00:00.116) 0:01:00.495 ****** 2026-02-03 03:59:12.080908 | orchestrator | 2026-02-03 03:59:12.080917 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-02-03 03:59:12.080940 | orchestrator | Tuesday 03 February 2026 03:59:01 +0000 (0:00:00.084) 0:01:00.579 ****** 2026-02-03 03:59:12.080950 | orchestrator | changed: [testbed-node-1] 2026-02-03 03:59:12.080959 | orchestrator | changed: [testbed-node-0] 2026-02-03 
03:59:12.080967 | orchestrator | changed: [testbed-node-2] 2026-02-03 03:59:12.080976 | orchestrator | 2026-02-03 03:59:12.080985 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 03:59:12.080995 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-03 03:59:12.081004 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-03 03:59:12.081012 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-03 03:59:12.081020 | orchestrator | 2026-02-03 03:59:12.081028 | orchestrator | 2026-02-03 03:59:12.081037 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 03:59:12.081054 | orchestrator | Tuesday 03 February 2026 03:59:11 +0000 (0:00:10.425) 0:01:11.004 ****** 2026-02-03 03:59:12.081062 | orchestrator | =============================================================================== 2026-02-03 03:59:12.081070 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.15s 2026-02-03 03:59:12.081093 | orchestrator | placement : Restart placement-api container ---------------------------- 10.43s 2026-02-03 03:59:12.081104 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.24s 2026-02-03 03:59:12.081113 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.15s 2026-02-03 03:59:12.081122 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.12s 2026-02-03 03:59:12.081131 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.95s 2026-02-03 03:59:12.081140 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.71s 2026-02-03 03:59:12.081149 | orchestrator | 
service-ks-register : placement | Creating roles ------------------------ 3.11s 2026-02-03 03:59:12.081159 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.44s 2026-02-03 03:59:12.081167 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.28s 2026-02-03 03:59:12.081176 | orchestrator | placement : Creating placement databases -------------------------------- 2.13s 2026-02-03 03:59:12.081185 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.91s 2026-02-03 03:59:12.081194 | orchestrator | placement : Copying over config.json files for services ----------------- 1.72s 2026-02-03 03:59:12.081203 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.59s 2026-02-03 03:59:12.081212 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.40s 2026-02-03 03:59:12.081221 | orchestrator | placement : Check placement containers ---------------------------------- 1.13s 2026-02-03 03:59:12.081230 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.87s 2026-02-03 03:59:12.081239 | orchestrator | placement : Copying over existing policy file --------------------------- 0.82s 2026-02-03 03:59:12.081248 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.76s 2026-02-03 03:59:12.081257 | orchestrator | placement : include_tasks ----------------------------------------------- 0.58s 2026-02-03 03:59:14.562295 | orchestrator | 2026-02-03 03:59:14 | INFO  | Task 6c531d49-106a-4b57-936b-897937f7687b (neutron) was prepared for execution. 2026-02-03 03:59:14.562376 | orchestrator | 2026-02-03 03:59:14 | INFO  | It takes a moment until task 6c531d49-106a-4b57-936b-897937f7687b (neutron) has been started and output is visible here. 
2026-02-03 04:00:02.626117 | orchestrator | 2026-02-03 04:00:02.626202 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-03 04:00:02.626211 | orchestrator | 2026-02-03 04:00:02.626217 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-03 04:00:02.626223 | orchestrator | Tuesday 03 February 2026 03:59:19 +0000 (0:00:00.286) 0:00:00.286 ****** 2026-02-03 04:00:02.626229 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:00:02.626236 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:00:02.626242 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:00:02.626247 | orchestrator | ok: [testbed-node-3] 2026-02-03 04:00:02.626252 | orchestrator | ok: [testbed-node-4] 2026-02-03 04:00:02.626257 | orchestrator | ok: [testbed-node-5] 2026-02-03 04:00:02.626263 | orchestrator | 2026-02-03 04:00:02.626268 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-03 04:00:02.626274 | orchestrator | Tuesday 03 February 2026 03:59:19 +0000 (0:00:00.764) 0:00:01.050 ****** 2026-02-03 04:00:02.626279 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-02-03 04:00:02.626286 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-02-03 04:00:02.626295 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-02-03 04:00:02.626325 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-02-03 04:00:02.626334 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-02-03 04:00:02.626342 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-02-03 04:00:02.626349 | orchestrator | 2026-02-03 04:00:02.626358 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-02-03 04:00:02.626366 | orchestrator | 2026-02-03 04:00:02.626373 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-02-03 04:00:02.626396 | orchestrator | Tuesday 03 February 2026 03:59:20 +0000 (0:00:00.687) 0:00:01.738 ****** 2026-02-03 04:00:02.626407 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-03 04:00:02.626461 | orchestrator | 2026-02-03 04:00:02.626471 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-02-03 04:00:02.626479 | orchestrator | Tuesday 03 February 2026 03:59:21 +0000 (0:00:01.323) 0:00:03.061 ****** 2026-02-03 04:00:02.626488 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:00:02.626497 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:00:02.626506 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:00:02.626514 | orchestrator | ok: [testbed-node-3] 2026-02-03 04:00:02.626522 | orchestrator | ok: [testbed-node-4] 2026-02-03 04:00:02.626531 | orchestrator | ok: [testbed-node-5] 2026-02-03 04:00:02.626539 | orchestrator | 2026-02-03 04:00:02.626549 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-02-03 04:00:02.626555 | orchestrator | Tuesday 03 February 2026 03:59:23 +0000 (0:00:01.364) 0:00:04.426 ****** 2026-02-03 04:00:02.626560 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:00:02.626565 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:00:02.626570 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:00:02.626575 | orchestrator | ok: [testbed-node-3] 2026-02-03 04:00:02.626580 | orchestrator | ok: [testbed-node-4] 2026-02-03 04:00:02.626585 | orchestrator | ok: [testbed-node-5] 2026-02-03 04:00:02.626631 | orchestrator | 2026-02-03 04:00:02.626639 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-02-03 04:00:02.626690 | orchestrator | Tuesday 03 February 2026 03:59:24 +0000 (0:00:01.090) 0:00:05.516 ****** 
2026-02-03 04:00:02.626697 | orchestrator | ok: [testbed-node-0] => { 2026-02-03 04:00:02.626704 | orchestrator |  "changed": false, 2026-02-03 04:00:02.626711 | orchestrator |  "msg": "All assertions passed" 2026-02-03 04:00:02.626717 | orchestrator | } 2026-02-03 04:00:02.626723 | orchestrator | ok: [testbed-node-1] => { 2026-02-03 04:00:02.626730 | orchestrator |  "changed": false, 2026-02-03 04:00:02.626736 | orchestrator |  "msg": "All assertions passed" 2026-02-03 04:00:02.626743 | orchestrator | } 2026-02-03 04:00:02.626749 | orchestrator | ok: [testbed-node-2] => { 2026-02-03 04:00:02.626795 | orchestrator |  "changed": false, 2026-02-03 04:00:02.626819 | orchestrator |  "msg": "All assertions passed" 2026-02-03 04:00:02.626825 | orchestrator | } 2026-02-03 04:00:02.626831 | orchestrator | ok: [testbed-node-3] => { 2026-02-03 04:00:02.626838 | orchestrator |  "changed": false, 2026-02-03 04:00:02.626844 | orchestrator |  "msg": "All assertions passed" 2026-02-03 04:00:02.626850 | orchestrator | } 2026-02-03 04:00:02.626857 | orchestrator | ok: [testbed-node-4] => { 2026-02-03 04:00:02.626864 | orchestrator |  "changed": false, 2026-02-03 04:00:02.626870 | orchestrator |  "msg": "All assertions passed" 2026-02-03 04:00:02.626876 | orchestrator | } 2026-02-03 04:00:02.626883 | orchestrator | ok: [testbed-node-5] => { 2026-02-03 04:00:02.626889 | orchestrator |  "changed": false, 2026-02-03 04:00:02.626896 | orchestrator |  "msg": "All assertions passed" 2026-02-03 04:00:02.626902 | orchestrator | } 2026-02-03 04:00:02.626908 | orchestrator | 2026-02-03 04:00:02.626914 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-02-03 04:00:02.626938 | orchestrator | Tuesday 03 February 2026 03:59:25 +0000 (0:00:00.870) 0:00:06.387 ****** 2026-02-03 04:00:02.626945 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:00:02.626959 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:00:02.626964 | orchestrator 
| skipping: [testbed-node-2] 2026-02-03 04:00:02.626970 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:00:02.626975 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:00:02.626980 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:00:02.626985 | orchestrator | 2026-02-03 04:00:02.626991 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-02-03 04:00:02.626996 | orchestrator | Tuesday 03 February 2026 03:59:25 +0000 (0:00:00.651) 0:00:07.038 ****** 2026-02-03 04:00:02.627001 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-02-03 04:00:02.627007 | orchestrator | 2026-02-03 04:00:02.627012 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-02-03 04:00:02.627018 | orchestrator | Tuesday 03 February 2026 03:59:29 +0000 (0:00:03.928) 0:00:10.966 ****** 2026-02-03 04:00:02.627023 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-02-03 04:00:02.627030 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-02-03 04:00:02.627035 | orchestrator | 2026-02-03 04:00:02.627056 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-02-03 04:00:02.627061 | orchestrator | Tuesday 03 February 2026 03:59:36 +0000 (0:00:06.502) 0:00:17.469 ****** 2026-02-03 04:00:02.627067 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-03 04:00:02.627072 | orchestrator | 2026-02-03 04:00:02.627078 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-02-03 04:00:02.627083 | orchestrator | Tuesday 03 February 2026 03:59:39 +0000 (0:00:02.736) 0:00:20.206 ****** 2026-02-03 04:00:02.627088 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-03 04:00:02.627093 | orchestrator | changed: 
[testbed-node-0] => (item=neutron -> service) 2026-02-03 04:00:02.627099 | orchestrator | 2026-02-03 04:00:02.627104 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-02-03 04:00:02.627109 | orchestrator | Tuesday 03 February 2026 03:59:42 +0000 (0:00:03.374) 0:00:23.581 ****** 2026-02-03 04:00:02.627115 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-03 04:00:02.627120 | orchestrator | 2026-02-03 04:00:02.627125 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-02-03 04:00:02.627130 | orchestrator | Tuesday 03 February 2026 03:59:45 +0000 (0:00:03.065) 0:00:26.646 ****** 2026-02-03 04:00:02.627136 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-02-03 04:00:02.627141 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-02-03 04:00:02.627146 | orchestrator | 2026-02-03 04:00:02.627152 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-03 04:00:02.627157 | orchestrator | Tuesday 03 February 2026 03:59:53 +0000 (0:00:08.197) 0:00:34.844 ****** 2026-02-03 04:00:02.627163 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:00:02.627168 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:00:02.627179 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:00:02.627184 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:00:02.627190 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:00:02.627195 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:00:02.627200 | orchestrator | 2026-02-03 04:00:02.627205 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-02-03 04:00:02.627211 | orchestrator | Tuesday 03 February 2026 03:59:54 +0000 (0:00:00.812) 0:00:35.656 ****** 2026-02-03 04:00:02.627216 | orchestrator | skipping: [testbed-node-2] 2026-02-03 
04:00:02.627221 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:00:02.627227 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:00:02.627232 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:00:02.627237 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:00:02.627242 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:00:02.627247 | orchestrator | 2026-02-03 04:00:02.627256 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-02-03 04:00:02.627262 | orchestrator | Tuesday 03 February 2026 03:59:56 +0000 (0:00:02.181) 0:00:37.838 ****** 2026-02-03 04:00:02.627268 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:00:02.627273 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:00:02.627278 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:00:02.627283 | orchestrator | ok: [testbed-node-3] 2026-02-03 04:00:02.627289 | orchestrator | ok: [testbed-node-4] 2026-02-03 04:00:02.627294 | orchestrator | ok: [testbed-node-5] 2026-02-03 04:00:02.627299 | orchestrator | 2026-02-03 04:00:02.627304 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-03 04:00:02.627310 | orchestrator | Tuesday 03 February 2026 03:59:57 +0000 (0:00:01.212) 0:00:39.050 ****** 2026-02-03 04:00:02.627315 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:00:02.627320 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:00:02.627326 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:00:02.627331 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:00:02.627336 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:00:02.627341 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:00:02.627346 | orchestrator | 2026-02-03 04:00:02.627352 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-02-03 04:00:02.627357 | orchestrator | Tuesday 03 February 2026 04:00:00 +0000 (0:00:02.281) 
0:00:41.332 ****** 2026-02-03 04:00:02.627365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-03 04:00:02.627380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-03 04:00:08.531288 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-03 04:00:08.531424 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-03 04:00:08.531443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-03 04:00:08.531454 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-03 04:00:08.531464 | orchestrator | 2026-02-03 04:00:08.531475 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-02-03 04:00:08.531486 | orchestrator | Tuesday 03 February 2026 04:00:02 +0000 (0:00:02.475) 0:00:43.808 ****** 2026-02-03 04:00:08.531507 | orchestrator | [WARNING]: Skipped 2026-02-03 04:00:08.531518 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-02-03 04:00:08.531528 | orchestrator | due to this access issue: 2026-02-03 04:00:08.531539 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-02-03 04:00:08.531547 | orchestrator | a directory 2026-02-03 04:00:08.531556 | orchestrator 
| ok: [testbed-node-0 -> localhost] 2026-02-03 04:00:08.531565 | orchestrator | 2026-02-03 04:00:08.531574 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-03 04:00:08.531582 | orchestrator | Tuesday 03 February 2026 04:00:03 +0000 (0:00:00.866) 0:00:44.674 ****** 2026-02-03 04:00:08.531649 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-03 04:00:08.531661 | orchestrator | 2026-02-03 04:00:08.531670 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-02-03 04:00:08.531701 | orchestrator | Tuesday 03 February 2026 04:00:04 +0000 (0:00:01.329) 0:00:46.004 ****** 2026-02-03 04:00:08.531722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-03 04:00:08.531744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 
'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-03 04:00:08.531753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-03 04:00:08.531763 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-03 04:00:08.531783 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-03 04:00:13.959448 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-03 04:00:13.959555 | orchestrator | 2026-02-03 04:00:13.959588 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-02-03 04:00:13.959643 | orchestrator | Tuesday 03 February 2026 04:00:08 +0000 (0:00:03.706) 0:00:49.710 ****** 2026-02-03 04:00:13.959666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-03 04:00:13.959689 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:00:13.959711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-03 04:00:13.959731 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:00:13.959750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-03 04:00:13.959801 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:00:13.959846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-03 04:00:13.959868 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:00:13.959895 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-03 04:00:13.959914 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:00:13.959932 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-03 04:00:13.959952 | orchestrator | skipping: [testbed-node-5] 
2026-02-03 04:00:13.959971 | orchestrator | 2026-02-03 04:00:13.959991 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-02-03 04:00:13.960012 | orchestrator | Tuesday 03 February 2026 04:00:10 +0000 (0:00:02.243) 0:00:51.954 ****** 2026-02-03 04:00:13.960033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-03 04:00:13.960053 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:00:13.960087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-03 04:00:19.841924 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:00:19.842061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-03 04:00:19.842076 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:00:19.842082 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-03 04:00:19.842088 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:00:19.842092 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-03 04:00:19.842096 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:00:19.842100 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-03 04:00:19.842120 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:00:19.842125 | orchestrator | 2026-02-03 
04:00:19.842130 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-02-03 04:00:19.842135 | orchestrator | Tuesday 03 February 2026 04:00:13 +0000 (0:00:03.187) 0:00:55.142 ****** 2026-02-03 04:00:19.842139 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:00:19.842143 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:00:19.842147 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:00:19.842150 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:00:19.842154 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:00:19.842158 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:00:19.842162 | orchestrator | 2026-02-03 04:00:19.842166 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-02-03 04:00:19.842169 | orchestrator | Tuesday 03 February 2026 04:00:16 +0000 (0:00:02.669) 0:00:57.811 ****** 2026-02-03 04:00:19.842173 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:00:19.842177 | orchestrator | 2026-02-03 04:00:19.842181 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-02-03 04:00:19.842195 | orchestrator | Tuesday 03 February 2026 04:00:16 +0000 (0:00:00.155) 0:00:57.966 ****** 2026-02-03 04:00:19.842200 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:00:19.842203 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:00:19.842207 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:00:19.842211 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:00:19.842215 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:00:19.842218 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:00:19.842222 | orchestrator | 2026-02-03 04:00:19.842226 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-02-03 04:00:19.842230 | orchestrator | Tuesday 03 February 2026 04:00:17 +0000 (0:00:00.692) 
0:00:58.659 ****** 2026-02-03 04:00:19.842237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-03 04:00:19.842242 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:00:19.842246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-03 
04:00:19.842253 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:00:19.842258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-03 04:00:19.842262 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:00:19.842266 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-03 04:00:19.842270 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:00:19.842280 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-03 04:00:29.192911 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:00:29.193025 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-03 04:00:29.193046 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:00:29.193061 | orchestrator | 2026-02-03 04:00:29.193077 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-02-03 04:00:29.193092 | orchestrator | Tuesday 03 February 2026 04:00:19 +0000 (0:00:02.358) 0:01:01.017 ****** 2026-02-03 04:00:29.193106 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-03 04:00:29.193150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-03 04:00:29.193165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-03 04:00:29.193214 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-03 04:00:29.193229 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-03 04:00:29.193269 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-03 04:00:29.193285 | orchestrator | 2026-02-03 04:00:29.193299 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-02-03 04:00:29.193312 | orchestrator | Tuesday 03 February 2026 04:00:22 +0000 (0:00:03.133) 0:01:04.151 ****** 2026-02-03 04:00:29.193326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-03 04:00:29.193340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-03 04:00:29.193372 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-03 04:00:34.222150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-03 04:00:34.222282 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 
2026-02-03 04:00:34.222298 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-03 04:00:34.222310 | orchestrator | 2026-02-03 04:00:34.222323 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-02-03 04:00:34.222334 | orchestrator | Tuesday 03 February 2026 04:00:29 +0000 (0:00:06.223) 0:01:10.374 ****** 2026-02-03 04:00:34.222359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}})  2026-02-03 04:00:34.222369 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:00:34.222399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-03 04:00:34.222419 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:00:34.222429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-03 04:00:34.222439 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:00:34.222448 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-03 04:00:34.222458 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:00:34.222467 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-03 04:00:34.222476 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:00:34.222491 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-03 04:00:34.222501 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:00:34.222518 | orchestrator |
2026-02-03 04:00:34.222528 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2026-02-03 04:00:34.222537 | orchestrator | Tuesday 03 February 2026 04:00:31 +0000 (0:00:02.300) 0:01:12.675 ******
2026-02-03 04:00:34.222547 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:00:34.222557 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:00:34.222567 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:00:34.222576 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:00:34.222586 | orchestrator | changed: [testbed-node-2]
2026-02-03 04:00:34.222602 | orchestrator | changed: [testbed-node-1]
2026-02-03 04:00:53.560524 | orchestrator |
2026-02-03 04:00:53.560607 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2026-02-03 04:00:53.560617 | orchestrator | Tuesday 03 February 2026 04:00:34 +0000 (0:00:02.726) 0:01:15.402 ******
2026-02-03 04:00:53.560625 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-03 04:00:53.560664 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:00:53.560674 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-03 04:00:53.560682 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:00:53.560690 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-03 04:00:53.560697 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:00:53.560720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-03 04:00:53.560770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-03 04:00:53.560778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-03 04:00:53.560786 | orchestrator |
2026-02-03 04:00:53.560793 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2026-02-03 04:00:53.560801 | orchestrator | Tuesday 03 February 2026 04:00:37 +0000 (0:00:03.362) 0:01:18.765 ******
2026-02-03 04:00:53.560808 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:00:53.560815 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:00:53.560823 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:00:53.560831 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:00:53.560838 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:00:53.560845 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:00:53.560854 | orchestrator |
2026-02-03 04:00:53.560859 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2026-02-03 04:00:53.560864 | orchestrator | Tuesday 03 February 2026 04:00:40 +0000 (0:00:02.443) 0:01:21.208 ******
2026-02-03 04:00:53.560868 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:00:53.560873 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:00:53.560877 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:00:53.560882 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:00:53.560887 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:00:53.560891 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:00:53.560896 | orchestrator |
2026-02-03 04:00:53.560900 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2026-02-03 04:00:53.560905 | orchestrator | Tuesday 03 February 2026 04:00:42 +0000 (0:00:02.201) 0:01:23.410 ******
2026-02-03 04:00:53.560910 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:00:53.560914 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:00:53.560919 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:00:53.560923 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:00:53.560928 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:00:53.560938 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:00:53.560942 | orchestrator |
2026-02-03 04:00:53.560947 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2026-02-03 04:00:53.560952 | orchestrator | Tuesday 03 February 2026 04:00:44 +0000 (0:00:02.417) 0:01:25.827 ******
2026-02-03 04:00:53.560956 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:00:53.560961 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:00:53.560965 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:00:53.560970 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:00:53.560974 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:00:53.560979 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:00:53.560983 | orchestrator |
2026-02-03 04:00:53.560988 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2026-02-03 04:00:53.560992 | orchestrator | Tuesday 03 February 2026 04:00:46 +0000 (0:00:02.161) 0:01:27.989 ******
2026-02-03 04:00:53.560997 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:00:53.561001 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:00:53.561006 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:00:53.561010 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:00:53.561015 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:00:53.561019 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:00:53.561024 | orchestrator |
2026-02-03 04:00:53.561028 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2026-02-03 04:00:53.561033 | orchestrator | Tuesday 03 February 2026 04:00:49 +0000 (0:00:02.223) 0:01:30.213 ******
2026-02-03 04:00:53.561037 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:00:53.561047 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:00:53.561055 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:00:53.561062 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:00:53.561069 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:00:53.561077 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:00:53.561084 | orchestrator |
2026-02-03 04:00:53.561092 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2026-02-03 04:00:53.561100 | orchestrator | Tuesday 03 February 2026 04:00:51 +0000 (0:00:02.180) 0:01:32.394 ******
2026-02-03 04:00:53.561109 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-03 04:00:53.561118 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:00:53.561126 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-03 04:00:53.561134 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:00:53.561140 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-03 04:00:53.561151 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:00:57.842795 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-03 04:00:57.842907 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:00:57.842925 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-03 04:00:57.842937 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:00:57.842950 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-03 04:00:57.842961 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:00:57.842973 | orchestrator |
2026-02-03 04:00:57.842987 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2026-02-03 04:00:57.842998 | orchestrator | Tuesday 03 February 2026 04:00:53 +0000 (0:00:02.345) 0:01:34.739 ******
2026-02-03 04:00:57.843013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-03 04:00:57.843051 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:00:57.843064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-03 04:00:57.843076 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:00:57.843104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-03 04:00:57.843117 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:00:57.843148 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-03 04:00:57.843161 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:00:57.843173 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-03 04:00:57.843193 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:00:57.843205 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-03 04:00:57.843216 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:00:57.843227 | orchestrator |
2026-02-03 04:00:57.843238 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2026-02-03 04:00:57.843250 | orchestrator | Tuesday 03 February 2026 04:00:55 +0000 (0:00:02.044) 0:01:36.784 ******
2026-02-03 04:00:57.843261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-03 04:00:57.843272 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:00:57.843289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-03 04:00:57.843301 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:00:57.843321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-03 04:01:24.536410 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:01:24.536494 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-03 04:01:24.536504 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:01:24.536509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-03 04:01:24.536513 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:01:24.536517 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-03 04:01:24.536521 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:01:24.536525 | orchestrator |
2026-02-03 04:01:24.536531 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2026-02-03 04:01:24.536536 | orchestrator | Tuesday 03 February 2026 04:00:57 +0000 (0:00:02.243) 0:01:39.027 ******
2026-02-03 04:01:24.536540 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:01:24.536543 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:01:24.536547 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:01:24.536551 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:01:24.536565 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:01:24.536569 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:01:24.536573 | orchestrator |
2026-02-03 04:01:24.536577 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-02-03 04:01:24.536581 | orchestrator | Tuesday 03 February 2026 04:01:00 +0000 (0:00:02.235) 0:01:41.263 ******
2026-02-03 04:01:24.536584 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:01:24.536588 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:01:24.536592 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:01:24.536596 | orchestrator | changed: [testbed-node-5]
2026-02-03 04:01:24.536599 | orchestrator | changed: [testbed-node-4]
2026-02-03 04:01:24.536603 | orchestrator | changed: [testbed-node-3]
2026-02-03 04:01:24.536624 | orchestrator |
2026-02-03 04:01:24.536628 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-02-03 04:01:24.536632 | orchestrator | Tuesday 03 February 2026 04:01:03 +0000 (0:00:03.783) 0:01:45.046 ******
2026-02-03 04:01:24.536635 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:01:24.536639 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:01:24.536643 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:01:24.536646 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:01:24.536650 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:01:24.536654 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:01:24.536700 | orchestrator |
2026-02-03 04:01:24.536704 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-02-03 04:01:24.536708 | orchestrator | Tuesday 03 February 2026 04:01:06 +0000 (0:00:02.336) 0:01:47.383 ******
2026-02-03 04:01:24.536712 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:01:24.536716 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:01:24.536719 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:01:24.536723 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:01:24.536727 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:01:24.536730 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:01:24.536734 | orchestrator |
2026-02-03 04:01:24.536738 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-02-03 04:01:24.536751 | orchestrator | Tuesday 03 February 2026 04:01:08 +0000 (0:00:02.201) 0:01:49.585 ******
2026-02-03 04:01:24.536755 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:01:24.536759 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:01:24.536763 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:01:24.536767 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:01:24.536770 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:01:24.536774 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:01:24.536778 | orchestrator |
2026-02-03 04:01:24.536782 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-02-03 04:01:24.536785 | orchestrator | Tuesday 03 February 2026 04:01:10 +0000 (0:00:02.203) 0:01:51.788 ******
2026-02-03 04:01:24.536789 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:01:24.536793 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:01:24.536797 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:01:24.536800 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:01:24.536804 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:01:24.536808 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:01:24.536818 | orchestrator |
2026-02-03 04:01:24.536822 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-02-03 04:01:24.536826 | orchestrator | Tuesday 03 February 2026 04:01:13 +0000 (0:00:02.456) 0:01:54.245 ******
2026-02-03 04:01:24.536834 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:01:24.536839 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:01:24.536843 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:01:24.536846 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:01:24.536850 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:01:24.536854 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:01:24.536858 | orchestrator |
2026-02-03 04:01:24.536862 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-02-03 04:01:24.536865 | orchestrator | Tuesday 03 February 2026 04:01:15 +0000 (0:00:02.265) 0:01:56.511 ******
2026-02-03 04:01:24.536869 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:01:24.536875 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:01:24.536881 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:01:24.536887 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:01:24.536892 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:01:24.536898 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:01:24.536904 | orchestrator |
2026-02-03 04:01:24.536910 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-02-03 04:01:24.536921 | orchestrator | Tuesday 03 February 2026 04:01:17 +0000 (0:00:02.209) 0:01:58.720 ******
2026-02-03 04:01:24.536927 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:01:24.536933 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:01:24.536939 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:01:24.536944 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:01:24.536950 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:01:24.536955 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:01:24.536961 | orchestrator |
2026-02-03 04:01:24.536967 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-02-03 04:01:24.536973 | orchestrator | Tuesday 03 February 2026 04:01:19 +0000 (0:00:02.471) 0:02:01.192 ******
2026-02-03 04:01:24.536979 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-03 04:01:24.536987 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:01:24.536993 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-03 04:01:24.536999 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:01:24.537006 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-03 04:01:24.537012 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:01:24.537018 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-03 04:01:24.537025 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:01:24.537031 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-03 04:01:24.537037 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:01:24.537049 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-03 04:01:24.537055 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:01:24.537061 | orchestrator |
2026-02-03 04:01:24.537068 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-02-03 04:01:24.537073 | orchestrator | Tuesday 03 February 2026 04:01:22 +0000 (0:00:02.266) 0:02:03.458 ******
2026-02-03 04:01:24.537081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-03 04:01:24.537089 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:01:24.537104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-03 04:01:27.425422 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:01:27.425513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-03 04:01:27.425525 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:01:27.425533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-03 04:01:27.425539 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:01:27.425556 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-03 04:01:27.425561 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:01:27.425565 | orchestrator | skipping:
[testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-03 04:01:27.425569 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:01:27.425573 | orchestrator | 2026-02-03 04:01:27.425578 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-02-03 04:01:27.425583 | orchestrator | Tuesday 03 February 2026 04:01:24 +0000 (0:00:02.263) 0:02:05.722 ****** 2026-02-03 04:01:27.425599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}}) 2026-02-03 04:01:27.425620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-03 04:01:27.425627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-03 04:01:27.425631 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-03 04:01:27.425636 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-03 04:01:27.425647 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-03 04:03:45.716927 | orchestrator |
2026-02-03 04:03:45.717009 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-02-03 04:03:45.717017 | orchestrator | Tuesday 03 February 2026 04:01:27 +0000 (0:00:02.893) 0:02:08.615 ******
2026-02-03 04:03:45.717022 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:03:45.717027 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:03:45.717031 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:03:45.717036 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:03:45.717040 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:03:45.717044 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:03:45.717048 | orchestrator |
2026-02-03 04:03:45.717052 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-02-03 04:03:45.717056 | orchestrator | Tuesday 03 February 2026 04:01:28 +0000 (0:00:00.600) 0:02:09.216 ******
2026-02-03 04:03:45.717060 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:03:45.717064 | orchestrator |
2026-02-03 04:03:45.717068 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-02-03 04:03:45.717072 | orchestrator | Tuesday 03 February 2026 04:01:30 +0000 (0:00:02.615) 0:02:11.832 ******
2026-02-03 04:03:45.717075 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:03:45.717079 | orchestrator |
2026-02-03 04:03:45.717083 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-02-03 04:03:45.717087 | orchestrator | Tuesday 03
February 2026 04:01:32 +0000 (0:00:02.322) 0:02:14.154 ******
2026-02-03 04:03:45.717091 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:03:45.717095 | orchestrator |
2026-02-03 04:03:45.717099 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-03 04:03:45.717103 | orchestrator | Tuesday 03 February 2026 04:02:14 +0000 (0:00:41.816) 0:02:55.971 ******
2026-02-03 04:03:45.717107 | orchestrator |
2026-02-03 04:03:45.717111 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-03 04:03:45.717115 | orchestrator | Tuesday 03 February 2026 04:02:14 +0000 (0:00:00.075) 0:02:56.047 ******
2026-02-03 04:03:45.717118 | orchestrator |
2026-02-03 04:03:45.717122 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-03 04:03:45.717126 | orchestrator | Tuesday 03 February 2026 04:02:14 +0000 (0:00:00.072) 0:02:56.119 ******
2026-02-03 04:03:45.717130 | orchestrator |
2026-02-03 04:03:45.717134 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-03 04:03:45.717150 | orchestrator | Tuesday 03 February 2026 04:02:14 +0000 (0:00:00.070) 0:02:56.190 ******
2026-02-03 04:03:45.717154 | orchestrator |
2026-02-03 04:03:45.717158 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-03 04:03:45.717162 | orchestrator | Tuesday 03 February 2026 04:02:15 +0000 (0:00:00.087) 0:02:56.278 ******
2026-02-03 04:03:45.717166 | orchestrator |
2026-02-03 04:03:45.717170 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-03 04:03:45.717173 | orchestrator | Tuesday 03 February 2026 04:02:15 +0000 (0:00:00.073) 0:02:56.351 ******
2026-02-03 04:03:45.717177 | orchestrator |
2026-02-03 04:03:45.717197 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container]
*******************
2026-02-03 04:03:45.717202 | orchestrator | Tuesday 03 February 2026 04:02:15 +0000 (0:00:00.076) 0:02:56.427 ******
2026-02-03 04:03:45.717206 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:03:45.717209 | orchestrator | changed: [testbed-node-1]
2026-02-03 04:03:45.717213 | orchestrator | changed: [testbed-node-2]
2026-02-03 04:03:45.717217 | orchestrator |
2026-02-03 04:03:45.717221 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-02-03 04:03:45.717225 | orchestrator | Tuesday 03 February 2026 04:02:40 +0000 (0:00:25.333) 0:03:21.761 ******
2026-02-03 04:03:45.717228 | orchestrator | changed: [testbed-node-3]
2026-02-03 04:03:45.717233 | orchestrator | changed: [testbed-node-4]
2026-02-03 04:03:45.717240 | orchestrator | changed: [testbed-node-5]
2026-02-03 04:03:45.717246 | orchestrator |
2026-02-03 04:03:45.717253 | orchestrator | PLAY RECAP *********************************************************************
2026-02-03 04:03:45.717260 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-03 04:03:45.717269 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-02-03 04:03:45.717275 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-02-03 04:03:45.717281 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-03 04:03:45.717287 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-03 04:03:45.717294 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-03 04:03:45.717299 | orchestrator |
2026-02-03 04:03:45.717305 | orchestrator |
2026-02-03 04:03:45.717311 | orchestrator | TASKS RECAP
********************************************************************
2026-02-03 04:03:45.717317 | orchestrator | Tuesday 03 February 2026 04:03:45 +0000 (0:01:04.595) 0:04:26.356 ******
2026-02-03 04:03:45.717323 | orchestrator | ===============================================================================
2026-02-03 04:03:45.717329 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 64.60s
2026-02-03 04:03:45.717336 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 41.82s
2026-02-03 04:03:45.717341 | orchestrator | neutron : Restart neutron-server container ----------------------------- 25.33s
2026-02-03 04:03:45.717362 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.20s
2026-02-03 04:03:45.717369 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.50s
2026-02-03 04:03:45.717376 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.22s
2026-02-03 04:03:45.717382 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.93s
2026-02-03 04:03:45.717389 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.78s
2026-02-03 04:03:45.717395 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.71s
2026-02-03 04:03:45.717398 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.37s
2026-02-03 04:03:45.717402 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.36s
2026-02-03 04:03:45.717406 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.19s
2026-02-03 04:03:45.717410 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.13s
2026-02-03 04:03:45.717414 | orchestrator | service-ks-register : neutron |
Creating roles -------------------------- 3.07s
2026-02-03 04:03:45.717417 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.89s
2026-02-03 04:03:45.717426 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 2.74s
2026-02-03 04:03:45.717430 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.73s
2026-02-03 04:03:45.717434 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 2.67s
2026-02-03 04:03:45.717438 | orchestrator | neutron : Creating Neutron database ------------------------------------- 2.62s
2026-02-03 04:03:45.717442 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 2.48s
2026-02-03 04:03:48.308110 | orchestrator | 2026-02-03 04:03:48 | INFO  | Task 7fce54bc-025f-412b-a94d-ffcef27f24a1 (nova) was prepared for execution.
2026-02-03 04:03:48.308224 | orchestrator | 2026-02-03 04:03:48 | INFO  | It takes a moment until task 7fce54bc-025f-412b-a94d-ffcef27f24a1 (nova) has been started and output is visible here.
2026-02-03 04:05:47.113077 | orchestrator |
2026-02-03 04:05:47.113186 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-03 04:05:47.113198 | orchestrator |
2026-02-03 04:05:47.113208 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-02-03 04:05:47.113217 | orchestrator | Tuesday 03 February 2026 04:03:52 +0000 (0:00:00.282) 0:00:00.282 ******
2026-02-03 04:05:47.113226 | orchestrator | changed: [testbed-manager]
2026-02-03 04:05:47.113236 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:05:47.113245 | orchestrator | changed: [testbed-node-1]
2026-02-03 04:05:47.113253 | orchestrator | changed: [testbed-node-2]
2026-02-03 04:05:47.113262 | orchestrator | changed: [testbed-node-3]
2026-02-03 04:05:47.113270 | orchestrator | changed: [testbed-node-4]
2026-02-03 04:05:47.113279 | orchestrator | changed: [testbed-node-5]
2026-02-03 04:05:47.113287 | orchestrator |
2026-02-03 04:05:47.113295 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-03 04:05:47.113304 | orchestrator | Tuesday 03 February 2026 04:03:53 +0000 (0:00:00.883) 0:00:01.166 ******
2026-02-03 04:05:47.113312 | orchestrator | changed: [testbed-manager]
2026-02-03 04:05:47.113320 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:05:47.113328 | orchestrator | changed: [testbed-node-1]
2026-02-03 04:05:47.113336 | orchestrator | changed: [testbed-node-2]
2026-02-03 04:05:47.113344 | orchestrator | changed: [testbed-node-3]
2026-02-03 04:05:47.113352 | orchestrator | changed: [testbed-node-4]
2026-02-03 04:05:47.113361 | orchestrator | changed: [testbed-node-5]
2026-02-03 04:05:47.113368 | orchestrator |
2026-02-03 04:05:47.113375 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-03 04:05:47.113383 | orchestrator | Tuesday 03 February 2026 04:03:54 +0000 (0:00:00.936)
0:00:02.103 ******
2026-02-03 04:05:47.113391 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-02-03 04:05:47.113399 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-02-03 04:05:47.113406 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-02-03 04:05:47.113414 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-02-03 04:05:47.113422 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-02-03 04:05:47.113430 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-02-03 04:05:47.113437 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-02-03 04:05:47.113445 | orchestrator |
2026-02-03 04:05:47.113453 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-02-03 04:05:47.113460 | orchestrator |
2026-02-03 04:05:47.113468 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-02-03 04:05:47.113476 | orchestrator | Tuesday 03 February 2026 04:03:55 +0000 (0:00:00.860) 0:00:02.964 ******
2026-02-03 04:05:47.113484 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 04:05:47.113492 | orchestrator |
2026-02-03 04:05:47.113500 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-02-03 04:05:47.113532 | orchestrator | Tuesday 03 February 2026 04:03:56 +0000 (0:00:00.833) 0:00:03.798 ******
2026-02-03 04:05:47.113540 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-02-03 04:05:47.113548 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-02-03 04:05:47.113556 | orchestrator |
2026-02-03 04:05:47.113564 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-02-03 04:05:47.113571 | orchestrator | Tuesday 03 February 2026 04:04:00 +0000 (0:00:04.420)
0:00:08.219 ******
2026-02-03 04:05:47.113579 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-03 04:05:47.113586 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-03 04:05:47.113593 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:05:47.113600 | orchestrator |
2026-02-03 04:05:47.113607 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-02-03 04:05:47.113614 | orchestrator | Tuesday 03 February 2026 04:04:04 +0000 (0:00:00.669) 0:00:12.421 ******
2026-02-03 04:05:47.113622 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:05:47.113631 | orchestrator |
2026-02-03 04:05:47.113639 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-02-03 04:05:47.113647 | orchestrator | Tuesday 03 February 2026 04:04:05 +0000 (0:00:01.316) 0:00:13.091 ******
2026-02-03 04:05:47.113654 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:05:47.113662 | orchestrator |
2026-02-03 04:05:47.113671 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-02-03 04:05:47.113679 | orchestrator | Tuesday 03 February 2026 04:04:06 +0000 (0:00:02.683) 0:00:14.407 ******
2026-02-03 04:05:47.113687 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:05:47.113695 | orchestrator |
2026-02-03 04:05:47.113704 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-03 04:05:47.113712 | orchestrator | Tuesday 03 February 2026 04:04:09 +0000 (0:00:02.683) 0:00:17.091 ******
2026-02-03 04:05:47.113721 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:05:47.113730 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:05:47.113739 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:05:47.113748 | orchestrator |
2026-02-03 04:05:47.113757 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-02-03 04:05:47.113766 | orchestrator | Tuesday 03 February 2026 04:04:09 +0000 (0:00:00.302) 0:00:17.393 ******
2026-02-03 04:05:47.113775 | orchestrator | ok: [testbed-node-0]
2026-02-03 04:05:47.113784 | orchestrator |
2026-02-03 04:05:47.113792 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-02-03 04:05:47.113801 | orchestrator | Tuesday 03 February 2026 04:04:41 +0000 (0:00:31.668) 0:00:49.062 ******
2026-02-03 04:05:47.113808 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:05:47.113815 | orchestrator |
2026-02-03 04:05:47.113822 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-03 04:05:47.113858 | orchestrator | Tuesday 03 February 2026 04:04:56 +0000 (0:00:14.874) 0:01:03.936 ******
2026-02-03 04:05:47.113869 | orchestrator | ok: [testbed-node-0]
2026-02-03 04:05:47.113876 | orchestrator |
2026-02-03 04:05:47.113884 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-03 04:05:47.113907 | orchestrator | Tuesday 03 February 2026 04:05:08 +0000 (0:00:12.132) 0:01:16.069 ******
2026-02-03 04:05:47.113936 | orchestrator | ok: [testbed-node-0]
2026-02-03 04:05:47.113945 | orchestrator |
2026-02-03 04:05:47.113952 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-02-03 04:05:47.113960 | orchestrator | Tuesday 03 February 2026 04:05:09 +0000 (0:00:00.746) 0:01:16.816 ******
2026-02-03 04:05:47.113968 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:05:47.113976 | orchestrator |
2026-02-03 04:05:47.113984 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-03 04:05:47.113992 | orchestrator | Tuesday 03 February 2026 04:05:09 +0000 (0:00:00.502) 0:01:17.318 ******
2026-02-03 04:05:47.114000 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for
testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 04:05:47.114072 | orchestrator |
2026-02-03 04:05:47.114083 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-02-03 04:05:47.114091 | orchestrator | Tuesday 03 February 2026 04:05:10 +0000 (0:00:00.829) 0:01:18.148 ******
2026-02-03 04:05:47.114099 | orchestrator | ok: [testbed-node-0]
2026-02-03 04:05:47.114107 | orchestrator |
2026-02-03 04:05:47.114115 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-02-03 04:05:47.114124 | orchestrator | Tuesday 03 February 2026 04:05:28 +0000 (0:00:17.850) 0:01:35.998 ******
2026-02-03 04:05:47.114132 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:05:47.114140 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:05:47.114176 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:05:47.114185 | orchestrator |
2026-02-03 04:05:47.114193 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-02-03 04:05:47.114201 | orchestrator |
2026-02-03 04:05:47.114210 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-02-03 04:05:47.114219 | orchestrator | Tuesday 03 February 2026 04:05:28 +0000 (0:00:00.359) 0:01:36.358 ******
2026-02-03 04:05:47.114227 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 04:05:47.114235 | orchestrator |
2026-02-03 04:05:47.114244 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-02-03 04:05:47.114252 | orchestrator | Tuesday 03 February 2026 04:05:29 +0000 (0:00:00.866) 0:01:37.224 ******
2026-02-03 04:05:47.114261 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:05:47.114269 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:05:47.114277 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:05:47.114285 |
orchestrator |
2026-02-03 04:05:47.114294 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-02-03 04:05:47.114302 | orchestrator | Tuesday 03 February 2026 04:05:31 +0000 (0:00:02.038) 0:01:39.263 ******
2026-02-03 04:05:47.114310 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:05:47.114318 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:05:47.114327 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:05:47.114335 | orchestrator |
2026-02-03 04:05:47.114343 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-03 04:05:47.114350 | orchestrator | Tuesday 03 February 2026 04:05:33 +0000 (0:00:02.098) 0:01:41.361 ******
2026-02-03 04:05:47.114359 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:05:47.114367 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:05:47.114375 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:05:47.114383 | orchestrator |
2026-02-03 04:05:47.114391 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-03 04:05:47.114399 | orchestrator | Tuesday 03 February 2026 04:05:34 +0000 (0:00:00.339) 0:01:41.701 ******
2026-02-03 04:05:47.114407 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-03 04:05:47.114414 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:05:47.114422 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-03 04:05:47.114430 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:05:47.114437 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-03 04:05:47.114445 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-02-03 04:05:47.114453 | orchestrator |
2026-02-03 04:05:47.114460 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-03 04:05:47.114468 | orchestrator | Tuesday 03 February 2026
04:05:41 +0000 (0:00:07.916) 0:01:49.618 ******
2026-02-03 04:05:47.114476 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:05:47.114484 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:05:47.114491 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:05:47.114500 | orchestrator |
2026-02-03 04:05:47.114507 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-03 04:05:47.114516 | orchestrator | Tuesday 03 February 2026 04:05:42 +0000 (0:00:00.367) 0:01:49.985 ******
2026-02-03 04:05:47.114524 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-03 04:05:47.114541 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:05:47.114549 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-03 04:05:47.114557 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:05:47.114565 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-03 04:05:47.114572 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:05:47.114580 | orchestrator |
2026-02-03 04:05:47.114588 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-02-03 04:05:47.114596 | orchestrator | Tuesday 03 February 2026 04:05:43 +0000 (0:00:00.910) 0:01:50.896 ******
2026-02-03 04:05:47.114604 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:05:47.114611 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:05:47.114619 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:05:47.114626 | orchestrator |
2026-02-03 04:05:47.114635 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-02-03 04:05:47.114642 | orchestrator | Tuesday 03 February 2026 04:05:43 +0000 (0:00:00.491) 0:01:51.387 ******
2026-02-03 04:05:47.114650 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:05:47.114659 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:05:47.114666 | orchestrator | changed:
[testbed-node-0] 2026-02-03 04:05:47.114674 | orchestrator | 2026-02-03 04:05:47.114682 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-02-03 04:05:47.114689 | orchestrator | Tuesday 03 February 2026 04:05:44 +0000 (0:00:01.016) 0:01:52.403 ****** 2026-02-03 04:05:47.114697 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:05:47.114705 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:05:47.114725 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:07:04.785236 | orchestrator | 2026-02-03 04:07:04.785356 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-02-03 04:07:04.785375 | orchestrator | Tuesday 03 February 2026 04:05:47 +0000 (0:00:02.350) 0:01:54.754 ****** 2026-02-03 04:07:04.785387 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:07:04.785399 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:07:04.785410 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:07:04.785422 | orchestrator | 2026-02-03 04:07:04.785434 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-03 04:07:04.785446 | orchestrator | Tuesday 03 February 2026 04:06:08 +0000 (0:00:20.954) 0:02:15.708 ****** 2026-02-03 04:07:04.785457 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:07:04.785468 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:07:04.785479 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:07:04.785490 | orchestrator | 2026-02-03 04:07:04.785501 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-03 04:07:04.785513 | orchestrator | Tuesday 03 February 2026 04:06:20 +0000 (0:00:12.253) 0:02:27.961 ****** 2026-02-03 04:07:04.785524 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:07:04.785535 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:07:04.785546 | orchestrator | skipping: [testbed-node-2] 
2026-02-03 04:07:04.785557 | orchestrator | 2026-02-03 04:07:04.785568 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-02-03 04:07:04.785579 | orchestrator | Tuesday 03 February 2026 04:06:21 +0000 (0:00:00.880) 0:02:28.842 ****** 2026-02-03 04:07:04.785591 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:07:04.785602 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:07:04.785613 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:07:04.785624 | orchestrator | 2026-02-03 04:07:04.785636 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-02-03 04:07:04.785647 | orchestrator | Tuesday 03 February 2026 04:06:34 +0000 (0:00:12.838) 0:02:41.681 ****** 2026-02-03 04:07:04.785658 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:07:04.785669 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:07:04.785680 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:07:04.785691 | orchestrator | 2026-02-03 04:07:04.785702 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-02-03 04:07:04.785740 | orchestrator | Tuesday 03 February 2026 04:06:35 +0000 (0:00:01.099) 0:02:42.781 ****** 2026-02-03 04:07:04.785752 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:07:04.785763 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:07:04.785778 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:07:04.785792 | orchestrator | 2026-02-03 04:07:04.785805 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-02-03 04:07:04.785818 | orchestrator | 2026-02-03 04:07:04.785831 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-03 04:07:04.785845 | orchestrator | Tuesday 03 February 2026 04:06:35 +0000 (0:00:00.344) 0:02:43.126 ****** 2026-02-03 04:07:04.785858 | orchestrator | 
included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 04:07:04.785903 | orchestrator | 2026-02-03 04:07:04.785919 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-02-03 04:07:04.785932 | orchestrator | Tuesday 03 February 2026 04:06:36 +0000 (0:00:00.834) 0:02:43.960 ****** 2026-02-03 04:07:04.785944 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-02-03 04:07:04.785955 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-02-03 04:07:04.785966 | orchestrator | 2026-02-03 04:07:04.785977 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-02-03 04:07:04.785988 | orchestrator | Tuesday 03 February 2026 04:06:39 +0000 (0:00:03.244) 0:02:47.205 ****** 2026-02-03 04:07:04.786000 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-02-03 04:07:04.786194 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-02-03 04:07:04.786220 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-02-03 04:07:04.786232 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-02-03 04:07:04.786243 | orchestrator | 2026-02-03 04:07:04.786254 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-02-03 04:07:04.786265 | orchestrator | Tuesday 03 February 2026 04:06:45 +0000 (0:00:06.405) 0:02:53.610 ****** 2026-02-03 04:07:04.786277 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-03 04:07:04.786288 | orchestrator | 2026-02-03 04:07:04.786299 | orchestrator | TASK [service-ks-register : nova | Creating users] 
***************************** 2026-02-03 04:07:04.786310 | orchestrator | Tuesday 03 February 2026 04:06:49 +0000 (0:00:03.122) 0:02:56.733 ****** 2026-02-03 04:07:04.786321 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-03 04:07:04.786332 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-02-03 04:07:04.786343 | orchestrator | 2026-02-03 04:07:04.786354 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-02-03 04:07:04.786365 | orchestrator | Tuesday 03 February 2026 04:06:52 +0000 (0:00:03.853) 0:03:00.587 ****** 2026-02-03 04:07:04.786376 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-03 04:07:04.786387 | orchestrator | 2026-02-03 04:07:04.786398 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-02-03 04:07:04.786409 | orchestrator | Tuesday 03 February 2026 04:06:56 +0000 (0:00:03.213) 0:03:03.800 ****** 2026-02-03 04:07:04.786420 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-02-03 04:07:04.786431 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-02-03 04:07:04.786442 | orchestrator | 2026-02-03 04:07:04.786458 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-02-03 04:07:04.786489 | orchestrator | Tuesday 03 February 2026 04:07:03 +0000 (0:00:07.260) 0:03:11.060 ****** 2026-02-03 04:07:04.786507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-03 04:07:04.786539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-03 04:07:04.786553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-03 04:07:04.786579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-02-03 04:07:09.547775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:07:09.547859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:07:09.547869 | orchestrator | 2026-02-03 04:07:09.547921 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-02-03 04:07:09.547928 | orchestrator | Tuesday 03 February 2026 04:07:04 +0000 (0:00:01.364) 0:03:12.425 ****** 2026-02-03 04:07:09.547933 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:07:09.547939 | orchestrator | 2026-02-03 04:07:09.547945 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-02-03 04:07:09.547950 | orchestrator | Tuesday 03 February 2026 04:07:04 +0000 (0:00:00.164) 0:03:12.590 ****** 2026-02-03 04:07:09.547954 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:07:09.547960 | 
orchestrator | skipping: [testbed-node-1] 2026-02-03 04:07:09.547964 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:07:09.547969 | orchestrator | 2026-02-03 04:07:09.547974 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-02-03 04:07:09.547979 | orchestrator | Tuesday 03 February 2026 04:07:05 +0000 (0:00:00.352) 0:03:12.943 ****** 2026-02-03 04:07:09.547983 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-03 04:07:09.547988 | orchestrator | 2026-02-03 04:07:09.547993 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-02-03 04:07:09.547998 | orchestrator | Tuesday 03 February 2026 04:07:05 +0000 (0:00:00.698) 0:03:13.641 ****** 2026-02-03 04:07:09.548002 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:07:09.548007 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:07:09.548012 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:07:09.548016 | orchestrator | 2026-02-03 04:07:09.548021 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-03 04:07:09.548026 | orchestrator | Tuesday 03 February 2026 04:07:06 +0000 (0:00:00.565) 0:03:14.206 ****** 2026-02-03 04:07:09.548031 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 04:07:09.548037 | orchestrator | 2026-02-03 04:07:09.548042 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-03 04:07:09.548046 | orchestrator | Tuesday 03 February 2026 04:07:07 +0000 (0:00:00.596) 0:03:14.802 ****** 2026-02-03 04:07:09.548065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-03 04:07:09.548099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-03 04:07:09.548106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-03 04:07:09.548111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:07:09.548117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:07:09.548128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:07:09.548133 | orchestrator | 2026-02-03 04:07:09.548141 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-03 04:07:11.572441 | orchestrator | Tuesday 03 February 2026 04:07:09 +0000 (0:00:02.380) 0:03:17.183 ****** 2026-02-03 04:07:11.572544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-03 04:07:11.572563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 04:07:11.572575 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:07:11.572587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-03 04:07:11.572632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 04:07:11.572643 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:07:11.572672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-03 04:07:11.572684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 04:07:11.572693 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:07:11.572702 | orchestrator | 2026-02-03 04:07:11.572713 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-03 04:07:11.572723 | orchestrator | Tuesday 03 February 2026 04:07:10 +0000 (0:00:01.113) 
0:03:18.297 ****** 2026-02-03 04:07:11.572733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-03 04:07:11.572750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 04:07:11.572760 | orchestrator | skipping: 
[testbed-node-0] 2026-02-03 04:07:11.572783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-03 04:07:14.091779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 04:07:14.091860 | orchestrator | skipping: 
[testbed-node-1] 2026-02-03 04:07:14.091871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-03 04:07:14.091942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 04:07:14.091951 | orchestrator | skipping: 
[testbed-node-2] 2026-02-03 04:07:14.091957 | orchestrator | 2026-02-03 04:07:14.091964 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-02-03 04:07:14.091971 | orchestrator | Tuesday 03 February 2026 04:07:11 +0000 (0:00:00.920) 0:03:19.217 ****** 2026-02-03 04:07:14.091988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-03 04:07:14.092008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-03 04:07:14.092016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-03 04:07:14.092031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:07:14.092038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:07:14.092049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:07:20.981226 | orchestrator | 2026-02-03 04:07:20.981325 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-02-03 04:07:20.981340 | orchestrator | Tuesday 03 February 2026 04:07:14 +0000 (0:00:02.514) 0:03:21.732 ****** 2026-02-03 04:07:20.981355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-03 04:07:20.981392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-03 04:07:20.981415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-03 04:07:20.981441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:07:20.981451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:07:20.981465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:07:20.981472 | orchestrator | 2026-02-03 04:07:20.981479 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-02-03 04:07:20.981486 | orchestrator | Tuesday 03 February 2026 04:07:20 +0000 (0:00:06.263) 0:03:27.995 ****** 2026-02-03 04:07:20.981499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-03 04:07:20.981507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 04:07:20.981515 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:07:20.981531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-03 04:07:25.814665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 04:07:25.814751 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:07:25.814765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-03 04:07:25.814791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 04:07:25.814800 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:07:25.814808 | orchestrator | 2026-02-03 04:07:25.814817 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-02-03 04:07:25.814826 | orchestrator | Tuesday 03 February 2026 04:07:20 +0000 (0:00:00.631) 0:03:28.627 ****** 2026-02-03 04:07:25.814833 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:07:25.814841 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:07:25.814848 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:07:25.814855 | orchestrator | 2026-02-03 04:07:25.814863 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-02-03 04:07:25.814870 | orchestrator | Tuesday 03 February 2026 04:07:22 +0000 (0:00:01.681) 0:03:30.308 ****** 2026-02-03 04:07:25.814878 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:07:25.814904 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:07:25.814911 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:07:25.814919 | orchestrator | 2026-02-03 04:07:25.814926 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-02-03 04:07:25.814934 | orchestrator | Tuesday 03 February 2026 04:07:23 +0000 (0:00:00.346) 0:03:30.655 ****** 2026-02-03 04:07:25.814976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-03 04:07:25.814987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-03 04:07:25.815001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-03 04:07:25.815009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:07:25.815024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:07:25.815037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:08:08.756067 | orchestrator | 2026-02-03 04:08:08.756196 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-03 04:08:08.756219 | orchestrator | Tuesday 03 February 2026 04:07:25 +0000 (0:00:02.331) 0:03:32.986 ****** 2026-02-03 04:08:08.756235 | orchestrator | 2026-02-03 04:08:08.756250 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-03 04:08:08.756263 | orchestrator | Tuesday 03 February 2026 04:07:25 
+0000 (0:00:00.156) 0:03:33.142 ****** 2026-02-03 04:08:08.756276 | orchestrator | 2026-02-03 04:08:08.756290 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-03 04:08:08.756305 | orchestrator | Tuesday 03 February 2026 04:07:25 +0000 (0:00:00.165) 0:03:33.307 ****** 2026-02-03 04:08:08.756319 | orchestrator | 2026-02-03 04:08:08.756332 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-02-03 04:08:08.756348 | orchestrator | Tuesday 03 February 2026 04:07:25 +0000 (0:00:00.143) 0:03:33.451 ****** 2026-02-03 04:08:08.756363 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:08:08.756377 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:08:08.756391 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:08:08.756404 | orchestrator | 2026-02-03 04:08:08.756417 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-02-03 04:08:08.756430 | orchestrator | Tuesday 03 February 2026 04:07:48 +0000 (0:00:23.060) 0:03:56.511 ****** 2026-02-03 04:08:08.756444 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:08:08.756458 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:08:08.756472 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:08:08.756485 | orchestrator | 2026-02-03 04:08:08.756499 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-02-03 04:08:08.756513 | orchestrator | 2026-02-03 04:08:08.756526 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-03 04:08:08.756541 | orchestrator | Tuesday 03 February 2026 04:07:57 +0000 (0:00:08.518) 0:04:05.030 ****** 2026-02-03 04:08:08.756558 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 04:08:08.756573 | 
orchestrator | 2026-02-03 04:08:08.756607 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-03 04:08:08.756623 | orchestrator | Tuesday 03 February 2026 04:07:58 +0000 (0:00:01.335) 0:04:06.366 ****** 2026-02-03 04:08:08.756637 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:08:08.756680 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:08:08.756695 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:08:08.756710 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:08:08.756724 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:08:08.756737 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:08:08.756750 | orchestrator | 2026-02-03 04:08:08.756763 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-02-03 04:08:08.756779 | orchestrator | Tuesday 03 February 2026 04:07:59 +0000 (0:00:00.694) 0:04:07.060 ****** 2026-02-03 04:08:08.756793 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:08:08.756806 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:08:08.756819 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:08:08.756832 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-03 04:08:08.756848 | orchestrator | 2026-02-03 04:08:08.756861 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-03 04:08:08.756874 | orchestrator | Tuesday 03 February 2026 04:08:00 +0000 (0:00:01.157) 0:04:08.217 ****** 2026-02-03 04:08:08.756888 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-02-03 04:08:08.756901 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-02-03 04:08:08.756963 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-02-03 04:08:08.756977 | orchestrator | 2026-02-03 04:08:08.756990 | orchestrator | TASK [module-load : Persist modules via modules-load.d] 
************************ 2026-02-03 04:08:08.757003 | orchestrator | Tuesday 03 February 2026 04:08:01 +0000 (0:00:00.716) 0:04:08.934 ****** 2026-02-03 04:08:08.757017 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-02-03 04:08:08.757030 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-02-03 04:08:08.757044 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-02-03 04:08:08.757058 | orchestrator | 2026-02-03 04:08:08.757071 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-03 04:08:08.757084 | orchestrator | Tuesday 03 February 2026 04:08:02 +0000 (0:00:01.410) 0:04:10.344 ****** 2026-02-03 04:08:08.757097 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-02-03 04:08:08.757110 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:08:08.757124 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-02-03 04:08:08.757139 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:08:08.757152 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-02-03 04:08:08.757165 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:08:08.757195 | orchestrator | 2026-02-03 04:08:08.757209 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-02-03 04:08:08.757233 | orchestrator | Tuesday 03 February 2026 04:08:03 +0000 (0:00:00.631) 0:04:10.976 ****** 2026-02-03 04:08:08.757247 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-03 04:08:08.757260 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-03 04:08:08.757273 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-03 04:08:08.757287 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-03 04:08:08.757300 | orchestrator | 
changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-03 04:08:08.757313 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:08:08.757327 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-03 04:08:08.757363 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-03 04:08:08.757378 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:08:08.757392 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-03 04:08:08.757405 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-03 04:08:08.757431 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-03 04:08:08.757445 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:08:08.757458 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-03 04:08:08.757472 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-03 04:08:08.757486 | orchestrator | 2026-02-03 04:08:08.757500 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-02-03 04:08:08.757513 | orchestrator | Tuesday 03 February 2026 04:08:04 +0000 (0:00:01.062) 0:04:12.039 ****** 2026-02-03 04:08:08.757526 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:08:08.757539 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:08:08.757553 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:08:08.757566 | orchestrator | changed: [testbed-node-3] 2026-02-03 04:08:08.757579 | orchestrator | changed: [testbed-node-4] 2026-02-03 04:08:08.757592 | orchestrator | changed: [testbed-node-5] 2026-02-03 04:08:08.757605 | orchestrator | 2026-02-03 04:08:08.757618 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-02-03 
04:08:08.757632 | orchestrator | Tuesday 03 February 2026 04:08:05 +0000 (0:00:01.170) 0:04:13.209 ****** 2026-02-03 04:08:08.757645 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:08:08.757659 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:08:08.757672 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:08:08.757685 | orchestrator | changed: [testbed-node-3] 2026-02-03 04:08:08.757700 | orchestrator | changed: [testbed-node-5] 2026-02-03 04:08:08.757713 | orchestrator | changed: [testbed-node-4] 2026-02-03 04:08:08.757726 | orchestrator | 2026-02-03 04:08:08.757740 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-02-03 04:08:08.757753 | orchestrator | Tuesday 03 February 2026 04:08:07 +0000 (0:00:01.469) 0:04:14.679 ****** 2026-02-03 04:08:08.757777 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-03 04:08:08.757799 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 
'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-03 04:08:08.757823 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-03 04:08:10.336970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-03 04:08:10.337066 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-03 04:08:10.337095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-03 04:08:10.337104 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-03 04:08:10.337112 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-03 04:08:10.337120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-03 04:08:10.337173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-03 04:08:10.337184 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-03 04:08:10.337198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-03 04:08:10.337206 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-03 04:08:10.337214 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-03 04:08:10.337222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-03 04:08:10.337239 | orchestrator | 2026-02-03 04:08:10.337248 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 
2026-02-03 04:08:10.337257 | orchestrator | Tuesday 03 February 2026 04:08:09 +0000 (0:00:01.976) 0:04:16.656 ****** 2026-02-03 04:08:10.337265 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 04:08:10.337274 | orchestrator | 2026-02-03 04:08:10.337282 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-03 04:08:10.337294 | orchestrator | Tuesday 03 February 2026 04:08:10 +0000 (0:00:01.322) 0:04:17.978 ****** 2026-02-03 04:08:13.866494 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-03 04:08:13.866623 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-03 04:08:13.866642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-03 04:08:13.866656 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}}) 2026-02-03 04:08:13.866695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-03 04:08:13.866727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-03 04:08:13.866740 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-03 04:08:13.866759 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-03 04:08:13.866771 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-03 04:08:13.866782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-03 04:08:13.866802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-03 04:08:13.866813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-03 04:08:13.866832 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-03 04:08:16.055864 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-03 04:08:16.056002 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-03 04:08:16.056015 | orchestrator | 2026-02-03 04:08:16.056024 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-03 04:08:16.056032 | orchestrator | Tuesday 03 February 2026 04:08:14 +0000 (0:00:04.158) 0:04:22.137 ****** 2026-02-03 04:08:16.056062 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-03 04:08:16.056071 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-03 04:08:16.056078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-03 04:08:16.056103 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-03 04:08:16.056111 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-03 04:08:16.056118 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-03 04:08:16.056130 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:08:16.056137 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:08:16.056144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-03 04:08:16.056151 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-03 04:08:16.056165 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-03 04:08:17.487397 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:08:17.487527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-03 04:08:17.487548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-03 04:08:17.487582 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:08:17.487594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-03 04:08:17.487606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-03 04:08:17.487618 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:08:17.487629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-03 04:08:17.487641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-03 04:08:17.487652 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:08:17.487663 | orchestrator |
2026-02-03 04:08:17.487676 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2026-02-03 04:08:17.487689 | orchestrator | Tuesday 03 February 2026 04:08:16 +0000 (0:00:01.564) 0:04:23.701 ******
2026-02-03 04:08:17.487726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-03 04:08:17.487747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-03 04:08:17.487760 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-03 04:08:17.487773 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:08:17.487784 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-03 04:08:17.487796 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-03 04:08:17.487822 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-03 04:08:25.322196 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-03 04:08:25.322295 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:08:25.322310 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-03 04:08:25.322318 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-03 04:08:25.322326 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:08:25.322334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-03 04:08:25.322343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-03 04:08:25.322351 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:08:25.322391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-03 04:08:25.322418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-03 04:08:25.322426 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:08:25.322434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-03 04:08:25.322442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-03 04:08:25.322450 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:08:25.322457 | orchestrator |
2026-02-03 04:08:25.322467 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-03 04:08:25.322476 | orchestrator | Tuesday 03 February 2026 04:08:18 +0000 (0:00:02.190) 0:04:25.892 ******
2026-02-03 04:08:25.322484 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:08:25.322491 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:08:25.322498 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:08:25.322506 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 04:08:25.322513 | orchestrator |
2026-02-03 04:08:25.322520 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-02-03 04:08:25.322527 | orchestrator | Tuesday 03 February 2026 04:08:19 +0000 (0:00:01.139) 0:04:27.031 ******
2026-02-03 04:08:25.322534 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-03 04:08:25.322541 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-03 04:08:25.322549 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-03 04:08:25.322556 | orchestrator |
2026-02-03 04:08:25.322563 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-02-03 04:08:25.322570 | orchestrator | Tuesday 03 February 2026 04:08:20 +0000 (0:00:00.978) 0:04:28.010 ******
2026-02-03 04:08:25.322576 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-03 04:08:25.322583 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-03 04:08:25.322590 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-03 04:08:25.322596 | orchestrator |
2026-02-03 04:08:25.322603 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-02-03 04:08:25.322617 | orchestrator | Tuesday 03 February 2026 04:08:21 +0000 (0:00:01.167) 0:04:29.177 ******
2026-02-03 04:08:25.322624 | orchestrator | ok: [testbed-node-3]
2026-02-03 04:08:25.322632 | orchestrator | ok: [testbed-node-4]
2026-02-03 04:08:25.322639 | orchestrator | ok: [testbed-node-5]
2026-02-03 04:08:25.322646 | orchestrator |
2026-02-03 04:08:25.322653 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-02-03 04:08:25.322660 | orchestrator | Tuesday 03 February 2026 04:08:22 +0000 (0:00:00.561) 0:04:29.739 ******
2026-02-03 04:08:25.322667 | orchestrator | ok: [testbed-node-3]
2026-02-03 04:08:25.322674 | orchestrator | ok: [testbed-node-4]
2026-02-03 04:08:25.322680 | orchestrator | ok: [testbed-node-5]
2026-02-03 04:08:25.322687 | orchestrator |
2026-02-03 04:08:25.322693 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-02-03 04:08:25.322700 | orchestrator | Tuesday 03 February 2026 04:08:22 +0000 (0:00:00.563) 0:04:30.303 ******
2026-02-03 04:08:25.322706 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-02-03 04:08:25.322714 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-02-03 04:08:25.322720 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-02-03 04:08:25.322727 | orchestrator |
2026-02-03 04:08:25.322739 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-02-03 04:08:25.322747 | orchestrator | Tuesday 03 February 2026 04:08:23 +0000 (0:00:01.185) 0:04:31.488 ******
2026-02-03 04:08:25.322754 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-02-03 04:08:25.322761 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-02-03 04:08:25.322779 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-02-03 04:08:44.288227 | orchestrator |
2026-02-03 04:08:44.288330 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-02-03 04:08:44.288342 | orchestrator | Tuesday 03 February 2026 04:08:25 +0000 (0:00:01.471) 0:04:32.960 ******
2026-02-03 04:08:44.288350 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-02-03 04:08:44.288357 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-02-03 04:08:44.288364 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-02-03 04:08:44.288370 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-02-03 04:08:44.288377 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-02-03 04:08:44.288383 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-02-03 04:08:44.288389 | orchestrator |
2026-02-03 04:08:44.288396 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-02-03 04:08:44.288402 | orchestrator | Tuesday 03 February 2026 04:08:29 +0000 (0:00:04.021) 0:04:36.982 ******
2026-02-03 04:08:44.288409 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:08:44.288417 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:08:44.288424 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:08:44.288431 | orchestrator |
2026-02-03 04:08:44.288437 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-02-03 04:08:44.288444 | orchestrator | Tuesday 03 February 2026 04:08:29 +0000 (0:00:00.361) 0:04:37.344 ******
2026-02-03 04:08:44.288450 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:08:44.288456 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:08:44.288462 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:08:44.288469 | orchestrator |
2026-02-03 04:08:44.288476 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-02-03 04:08:44.288482 | orchestrator | Tuesday 03 February 2026 04:08:30 +0000 (0:00:00.327) 0:04:37.672 ******
2026-02-03 04:08:44.288489 | orchestrator | changed: [testbed-node-3]
2026-02-03 04:08:44.288496 | orchestrator | changed: [testbed-node-4]
2026-02-03 04:08:44.288502 | orchestrator | changed: [testbed-node-5]
2026-02-03 04:08:44.288508 | orchestrator |
2026-02-03 04:08:44.288515 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-02-03 04:08:44.288542 | orchestrator | Tuesday 03 February 2026 04:08:31 +0000 (0:00:01.545) 0:04:39.218 ******
2026-02-03 04:08:44.288550 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-02-03 04:08:44.288558 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-02-03 04:08:44.288564 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-02-03 04:08:44.288571 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-02-03 04:08:44.288577 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-02-03 04:08:44.288584 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-02-03 04:08:44.288591 | orchestrator |
2026-02-03 04:08:44.288598 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-02-03 04:08:44.288606 | orchestrator | Tuesday 03 February 2026 04:08:34 +0000 (0:00:03.240) 0:04:42.458 ******
2026-02-03 04:08:44.288613 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-03 04:08:44.288620 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-03 04:08:44.288627 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-03 04:08:44.288635 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-03 04:08:44.288642 | orchestrator | changed: [testbed-node-3]
2026-02-03 04:08:44.288649 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-03 04:08:44.288656 | orchestrator | changed: [testbed-node-4]
2026-02-03 04:08:44.288663 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-03 04:08:44.288670 | orchestrator | changed: [testbed-node-5]
2026-02-03 04:08:44.288677 | orchestrator |
2026-02-03 04:08:44.288684 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-02-03 04:08:44.288692 | orchestrator | Tuesday 03 February 2026 04:08:38 +0000 (0:00:03.429) 0:04:45.888 ******
2026-02-03 04:08:44.288699 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:08:44.288706 | orchestrator |
2026-02-03 04:08:44.288714 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-02-03 04:08:44.288722 | orchestrator | Tuesday 03 February 2026 04:08:38 +0000 (0:00:00.158) 0:04:46.046 ******
2026-02-03 04:08:44.288729 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:08:44.288736 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:08:44.288742 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:08:44.288748 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:08:44.288754 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:08:44.288760 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:08:44.288766 | orchestrator |
2026-02-03 04:08:44.288772 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-02-03 04:08:44.288778 | orchestrator | Tuesday 03 February 2026 04:08:39 +0000 (0:00:00.887) 0:04:46.933 ******
2026-02-03 04:08:44.288784 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-03 04:08:44.288791 | orchestrator |
2026-02-03 04:08:44.288813 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-02-03 04:08:44.288822 | orchestrator | Tuesday 03 February 2026 04:08:40 +0000 (0:00:00.754) 0:04:47.688 ******
2026-02-03 04:08:44.288828 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:08:44.288834 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:08:44.288840 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:08:44.288846 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:08:44.288869 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:08:44.288875 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:08:44.288881 | orchestrator |
2026-02-03 04:08:44.288887 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-02-03 04:08:44.288902 | orchestrator | Tuesday 03 February 2026 04:08:40 +0000 (0:00:00.640) 0:04:48.328 ******
2026-02-03 04:08:44.288912 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-03 04:08:44.288942 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-03 04:08:44.288950 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-03 04:08:44.288957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-03 04:08:44.288974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-03 04:08:51.504522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-03 04:08:51.504631 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-03 04:08:51.504651 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-03 04:08:51.504663 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-03 04:08:51.504675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-03 04:08:51.504687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-03 04:08:51.504733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-03 04:08:51.504771 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-03 04:08:51.504787 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-03 04:08:51.504799 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-03 04:08:51.504811 | orchestrator |
2026-02-03 04:08:51.504825 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2026-02-03 04:08:51.504838 | orchestrator | Tuesday 03 February 2026 04:08:44 +0000 (0:00:04.126) 0:04:52.454 ******
2026-02-03 04:08:51.504851 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-03 04:08:51.504868 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-03 04:08:51.504896 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-03 04:08:51.704915 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name':
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-03 04:08:51.705101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-03 04:08:51.705118 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-03 04:08:51.705130 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-03 04:08:51.705185 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-03 04:08:51.705219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-03 04:08:51.705233 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-03 04:08:51.705245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-03 04:08:51.705257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 
'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-03 04:08:51.705269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-03 04:08:51.705293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-03 04:08:51.705305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-03 04:08:51.705317 | orchestrator |
2026-02-03 04:08:51.705332 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-02-03 04:08:51.705352 | orchestrator | Tuesday 03 February 2026 04:08:51 +0000 (0:00:06.895) 0:04:59.350 ******
2026-02-03 04:09:14.696376 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:09:14.696471 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:09:14.696480 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:09:14.696484 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:09:14.696489 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:09:14.696493 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:09:14.696497 | orchestrator |
2026-02-03 04:09:14.696503 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-02-03 04:09:14.696508 | orchestrator | Tuesday 03 February 2026 04:08:53 +0000 (0:00:01.652) 0:05:01.002 ******
2026-02-03 04:09:14.696512 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-03 04:09:14.696517 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-03 04:09:14.696522 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-03 04:09:14.696526 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-03 04:09:14.696530 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-03 04:09:14.696535 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:09:14.696539 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-03 04:09:14.696543 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-03 04:09:14.696547 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-03 04:09:14.696551 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:09:14.696556 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-03 04:09:14.696562 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:09:14.696568 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-03 04:09:14.696599 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-03 04:09:14.696604 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-03 04:09:14.696608 | orchestrator |
2026-02-03 04:09:14.696613 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-02-03 04:09:14.696617 | orchestrator | Tuesday 03 February 2026 04:08:57 +0000 (0:00:03.953) 0:05:04.955 ******
2026-02-03 04:09:14.696621 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:09:14.696625 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:09:14.696628 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:09:14.696632 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:09:14.696636 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:09:14.696640 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:09:14.696643 | orchestrator |
2026-02-03 04:09:14.696648 | orchestrator |
TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2026-02-03 04:09:14.696654 | orchestrator | Tuesday 03 February 2026 04:08:58 +0000 (0:00:00.834) 0:05:05.790 ******
2026-02-03 04:09:14.696660 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-03 04:09:14.696667 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-03 04:09:14.696674 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-03 04:09:14.696680 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-03 04:09:14.696687 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-03 04:09:14.696705 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-03 04:09:14.696709 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-03 04:09:14.696713 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-03 04:09:14.696717 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-03 04:09:14.696721 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-02-03 04:09:14.696724 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:09:14.696728 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-02-03 04:09:14.696732 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:09:14.696736 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-02-03 04:09:14.696740 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:09:14.696743 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-03 04:09:14.696747 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-03 04:09:14.696763 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-03 04:09:14.696767 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-02-03 04:09:14.696771 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-02-03 04:09:14.696775 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-02-03 04:09:14.696779 | orchestrator |
2026-02-03 04:09:14.696786 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-02-03 04:09:14.696798 | orchestrator | Tuesday 03 February 2026 04:09:03 +0000 (0:00:05.699) 0:05:11.489 ******
2026-02-03 04:09:14.696804 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-03 04:09:14.696810 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-03 04:09:14.696817 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-03 04:09:14.696823 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-03 04:09:14.696830 |
orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-03 04:09:14.696835 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-03 04:09:14.696842 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-03 04:09:14.696847 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-03 04:09:14.696853 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-03 04:09:14.696859 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-03 04:09:14.696864 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-03 04:09:14.696873 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-03 04:09:14.696880 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-03 04:09:14.696887 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:09:14.696892 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-03 04:09:14.696898 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:09:14.696905 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-03 04:09:14.696911 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-03 04:09:14.696917 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:09:14.696922 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-03 04:09:14.696928 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-03 04:09:14.696934 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-03 04:09:14.696959 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-03 04:09:14.696964 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-03 04:09:14.696970 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-03 04:09:14.696977 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-03 04:09:14.696989 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-03 04:09:14.696995 | orchestrator |
2026-02-03 04:09:14.697002 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-02-03 04:09:14.697009 | orchestrator | Tuesday 03 February 2026 04:09:11 +0000 (0:00:07.231) 0:05:18.720 ******
2026-02-03 04:09:14.697015 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:09:14.697021 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:09:14.697028 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:09:14.697034 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:09:14.697040 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:09:14.697047 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:09:14.697054 | orchestrator |
2026-02-03 04:09:14.697060 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-02-03 04:09:14.697074 | orchestrator | Tuesday 03 February 2026 04:09:11 +0000 (0:00:00.753) 0:05:19.474 ******
2026-02-03 04:09:14.697081 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:09:14.697086 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:09:14.697093 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:09:14.697099 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:09:14.697105 |
orchestrator | skipping: [testbed-node-1] 2026-02-03 04:09:14.697111 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:09:14.697117 | orchestrator | 2026-02-03 04:09:14.697124 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-02-03 04:09:14.697130 | orchestrator | Tuesday 03 February 2026 04:09:12 +0000 (0:00:00.919) 0:05:20.394 ****** 2026-02-03 04:09:14.697137 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:09:14.697143 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:09:14.697149 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:09:14.697156 | orchestrator | changed: [testbed-node-3] 2026-02-03 04:09:14.697162 | orchestrator | changed: [testbed-node-5] 2026-02-03 04:09:14.697168 | orchestrator | changed: [testbed-node-4] 2026-02-03 04:09:14.697174 | orchestrator | 2026-02-03 04:09:14.697186 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-02-03 04:09:16.641021 | orchestrator | Tuesday 03 February 2026 04:09:14 +0000 (0:00:01.940) 0:05:22.334 ****** 2026-02-03 04:09:16.641153 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}})  2026-02-03 04:09:16.641178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-03 04:09:16.641194 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-03 04:09:16.641207 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:09:16.641240 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-03 04:09:16.641283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-03 04:09:16.641317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-03 04:09:16.641329 | orchestrator | skipping: 
[testbed-node-4] 2026-02-03 04:09:16.641341 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-03 04:09:16.641353 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-03 04:09:16.641370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-03 04:09:16.641399 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:09:16.641412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-03 04:09:16.641433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-03 04:09:20.038339 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:09:20.038429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-03 04:09:20.038442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-03 04:09:20.038449 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:09:20.038456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-03 04:09:20.038463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-03 04:09:20.038487 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:09:20.038494 | orchestrator | 2026-02-03 04:09:20.038502 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-02-03 04:09:20.038510 | orchestrator | Tuesday 03 February 2026 04:09:16 +0000 (0:00:01.948) 0:05:24.283 ****** 2026-02-03 04:09:20.038529 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-02-03 04:09:20.038536 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-02-03 04:09:20.038542 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:09:20.038549 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-02-03 04:09:20.038555 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-02-03 04:09:20.038561 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:09:20.038568 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-02-03 04:09:20.038574 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-02-03 04:09:20.038580 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:09:20.038586 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-02-03 04:09:20.038593 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-02-03 04:09:20.038599 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:09:20.038605 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute) 
 2026-02-03 04:09:20.038612 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-02-03 04:09:20.038618 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:09:20.038625 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-02-03 04:09:20.038630 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-02-03 04:09:20.038637 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:09:20.038642 | orchestrator | 2026-02-03 04:09:20.038649 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-02-03 04:09:20.038653 | orchestrator | Tuesday 03 February 2026 04:09:17 +0000 (0:00:00.699) 0:05:24.983 ****** 2026-02-03 04:09:20.038670 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-03 04:09:20.038676 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-03 04:09:20.038685 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-03 04:09:20.038692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-03 04:09:20.038696 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-03 04:09:20.038705 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-03 04:10:05.371593 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8022'], 'timeout': '30'}}}) 2026-02-03 04:10:05.371683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-03 04:10:05.371712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-03 04:10:05.371730 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-03 04:10:05.371736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-03 04:10:05.371740 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-03 04:10:05.371757 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-03 04:10:05.371762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-03 04:10:05.371770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-03 04:10:05.371774 | orchestrator | 2026-02-03 04:10:05.371779 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-03 04:10:05.371784 | orchestrator | Tuesday 03 February 2026 04:09:20 +0000 (0:00:02.975) 
0:05:27.958 ****** 2026-02-03 04:10:05.371788 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:10:05.371793 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:10:05.371797 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:10:05.371801 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:10:05.371805 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:10:05.371809 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:10:05.371813 | orchestrator | 2026-02-03 04:10:05.371817 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-03 04:10:05.371821 | orchestrator | Tuesday 03 February 2026 04:09:21 +0000 (0:00:00.901) 0:05:28.860 ****** 2026-02-03 04:10:05.371825 | orchestrator | 2026-02-03 04:10:05.371829 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-03 04:10:05.371835 | orchestrator | Tuesday 03 February 2026 04:09:21 +0000 (0:00:00.164) 0:05:29.024 ****** 2026-02-03 04:10:05.371839 | orchestrator | 2026-02-03 04:10:05.371844 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-03 04:10:05.371847 | orchestrator | Tuesday 03 February 2026 04:09:21 +0000 (0:00:00.151) 0:05:29.176 ****** 2026-02-03 04:10:05.371851 | orchestrator | 2026-02-03 04:10:05.371855 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-03 04:10:05.371859 | orchestrator | Tuesday 03 February 2026 04:09:21 +0000 (0:00:00.149) 0:05:29.325 ****** 2026-02-03 04:10:05.371863 | orchestrator | 2026-02-03 04:10:05.371867 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-03 04:10:05.371871 | orchestrator | Tuesday 03 February 2026 04:09:21 +0000 (0:00:00.151) 0:05:29.477 ****** 2026-02-03 04:10:05.371875 | orchestrator | 2026-02-03 04:10:05.371879 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2026-02-03 04:10:05.371883 | orchestrator | Tuesday 03 February 2026 04:09:21 +0000 (0:00:00.136) 0:05:29.613 ****** 2026-02-03 04:10:05.371887 | orchestrator | 2026-02-03 04:10:05.371890 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-02-03 04:10:05.371894 | orchestrator | Tuesday 03 February 2026 04:09:22 +0000 (0:00:00.385) 0:05:29.999 ****** 2026-02-03 04:10:05.371898 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:10:05.371902 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:10:05.371906 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:10:05.371910 | orchestrator | 2026-02-03 04:10:05.371914 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-02-03 04:10:05.371918 | orchestrator | Tuesday 03 February 2026 04:09:29 +0000 (0:00:06.990) 0:05:36.989 ****** 2026-02-03 04:10:05.371921 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:10:05.371925 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:10:05.371929 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:10:05.371937 | orchestrator | 2026-02-03 04:10:05.371941 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-02-03 04:10:05.371945 | orchestrator | Tuesday 03 February 2026 04:09:43 +0000 (0:00:14.582) 0:05:51.571 ****** 2026-02-03 04:10:05.371949 | orchestrator | changed: [testbed-node-3] 2026-02-03 04:10:05.371953 | orchestrator | changed: [testbed-node-5] 2026-02-03 04:10:05.371957 | orchestrator | changed: [testbed-node-4] 2026-02-03 04:10:05.371982 | orchestrator | 2026-02-03 04:10:05.371995 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-02-03 04:12:29.781393 | orchestrator | Tuesday 03 February 2026 04:10:05 +0000 (0:00:21.437) 0:06:13.008 ****** 2026-02-03 04:12:29.781494 | orchestrator | changed: 
[testbed-node-3] 2026-02-03 04:12:29.781506 | orchestrator | changed: [testbed-node-4] 2026-02-03 04:12:29.781513 | orchestrator | changed: [testbed-node-5] 2026-02-03 04:12:29.781520 | orchestrator | 2026-02-03 04:12:29.781528 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-02-03 04:12:29.781536 | orchestrator | Tuesday 03 February 2026 04:10:48 +0000 (0:00:43.242) 0:06:56.251 ****** 2026-02-03 04:12:29.781542 | orchestrator | changed: [testbed-node-3] 2026-02-03 04:12:29.781549 | orchestrator | changed: [testbed-node-4] 2026-02-03 04:12:29.781555 | orchestrator | changed: [testbed-node-5] 2026-02-03 04:12:29.781562 | orchestrator | 2026-02-03 04:12:29.781569 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-02-03 04:12:29.781576 | orchestrator | Tuesday 03 February 2026 04:10:49 +0000 (0:00:00.804) 0:06:57.056 ****** 2026-02-03 04:12:29.781582 | orchestrator | changed: [testbed-node-3] 2026-02-03 04:12:29.781589 | orchestrator | changed: [testbed-node-4] 2026-02-03 04:12:29.781596 | orchestrator | changed: [testbed-node-5] 2026-02-03 04:12:29.781602 | orchestrator | 2026-02-03 04:12:29.781609 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-02-03 04:12:29.781615 | orchestrator | Tuesday 03 February 2026 04:10:50 +0000 (0:00:00.815) 0:06:57.871 ****** 2026-02-03 04:12:29.781622 | orchestrator | changed: [testbed-node-3] 2026-02-03 04:12:29.781628 | orchestrator | changed: [testbed-node-5] 2026-02-03 04:12:29.781634 | orchestrator | changed: [testbed-node-4] 2026-02-03 04:12:29.781641 | orchestrator | 2026-02-03 04:12:29.781647 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-02-03 04:12:29.781655 | orchestrator | Tuesday 03 February 2026 04:11:15 +0000 (0:00:25.600) 0:07:23.471 ****** 2026-02-03 04:12:29.781662 | orchestrator | skipping: 
[testbed-node-3] 2026-02-03 04:12:29.781668 | orchestrator | 2026-02-03 04:12:29.781675 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-02-03 04:12:29.781681 | orchestrator | Tuesday 03 February 2026 04:11:15 +0000 (0:00:00.147) 0:07:23.619 ****** 2026-02-03 04:12:29.781687 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:12:29.781694 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:12:29.781700 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:12:29.781706 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:12:29.781713 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:12:29.781721 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-02-03 04:12:29.781729 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-03 04:12:29.781735 | orchestrator | 2026-02-03 04:12:29.781742 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-02-03 04:12:29.781749 | orchestrator | Tuesday 03 February 2026 04:11:40 +0000 (0:00:24.388) 0:07:48.007 ****** 2026-02-03 04:12:29.781755 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:12:29.781762 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:12:29.781768 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:12:29.781775 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:12:29.781781 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:12:29.781788 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:12:29.781815 | orchestrator | 2026-02-03 04:12:29.781823 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-02-03 04:12:29.781829 | orchestrator | Tuesday 03 February 2026 04:11:50 +0000 (0:00:10.250) 0:07:58.257 ****** 2026-02-03 04:12:29.781836 | orchestrator | skipping: [testbed-node-3] 
2026-02-03 04:12:29.781842 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:12:29.781849 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:12:29.781855 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:12:29.781862 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:12:29.781882 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2026-02-03 04:12:29.781889 | orchestrator | 2026-02-03 04:12:29.781896 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-03 04:12:29.781903 | orchestrator | Tuesday 03 February 2026 04:11:55 +0000 (0:00:04.415) 0:08:02.673 ****** 2026-02-03 04:12:29.781910 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-03 04:12:29.781917 | orchestrator | 2026-02-03 04:12:29.781923 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-03 04:12:29.781930 | orchestrator | Tuesday 03 February 2026 04:12:08 +0000 (0:00:13.158) 0:08:15.831 ****** 2026-02-03 04:12:29.781937 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-03 04:12:29.781943 | orchestrator | 2026-02-03 04:12:29.781950 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-02-03 04:12:29.781957 | orchestrator | Tuesday 03 February 2026 04:12:09 +0000 (0:00:01.669) 0:08:17.501 ****** 2026-02-03 04:12:29.781964 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:12:29.781970 | orchestrator | 2026-02-03 04:12:29.781977 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-02-03 04:12:29.781983 | orchestrator | Tuesday 03 February 2026 04:12:11 +0000 (0:00:01.689) 0:08:19.190 ****** 2026-02-03 04:12:29.781990 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-03 04:12:29.781997 | orchestrator | 2026-02-03 04:12:29.782003 | 
orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-02-03 04:12:29.782010 | orchestrator | Tuesday 03 February 2026 04:12:23 +0000 (0:00:11.973) 0:08:31.164 ****** 2026-02-03 04:12:29.782090 | orchestrator | ok: [testbed-node-3] 2026-02-03 04:12:29.782099 | orchestrator | ok: [testbed-node-4] 2026-02-03 04:12:29.782105 | orchestrator | ok: [testbed-node-5] 2026-02-03 04:12:29.782111 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:12:29.782117 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:12:29.782123 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:12:29.782130 | orchestrator | 2026-02-03 04:12:29.782136 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-02-03 04:12:29.782142 | orchestrator | 2026-02-03 04:12:29.782148 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-02-03 04:12:29.782169 | orchestrator | Tuesday 03 February 2026 04:12:25 +0000 (0:00:01.971) 0:08:33.135 ****** 2026-02-03 04:12:29.782176 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:12:29.782183 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:12:29.782189 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:12:29.782195 | orchestrator | 2026-02-03 04:12:29.782201 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-02-03 04:12:29.782207 | orchestrator | 2026-02-03 04:12:29.782214 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-02-03 04:12:29.782220 | orchestrator | Tuesday 03 February 2026 04:12:26 +0000 (0:00:01.342) 0:08:34.478 ****** 2026-02-03 04:12:29.782226 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:12:29.782232 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:12:29.782239 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:12:29.782245 | orchestrator | 2026-02-03 
04:12:29.782251 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-02-03 04:12:29.782257 | orchestrator | 2026-02-03 04:12:29.782264 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-02-03 04:12:29.782277 | orchestrator | Tuesday 03 February 2026 04:12:27 +0000 (0:00:00.544) 0:08:35.022 ****** 2026-02-03 04:12:29.782284 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-02-03 04:12:29.782290 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-02-03 04:12:29.782296 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-02-03 04:12:29.782303 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-02-03 04:12:29.782309 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-02-03 04:12:29.782315 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-02-03 04:12:29.782321 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:12:29.782327 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-02-03 04:12:29.782334 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-02-03 04:12:29.782340 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-02-03 04:12:29.782346 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-02-03 04:12:29.782353 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-02-03 04:12:29.782359 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-02-03 04:12:29.782366 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:12:29.782372 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-02-03 04:12:29.782378 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-02-03 04:12:29.782384 | orchestrator | skipping: [testbed-node-5] => 
(item=nova-compute-ironic)  2026-02-03 04:12:29.782390 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-02-03 04:12:29.782396 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-02-03 04:12:29.782403 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-02-03 04:12:29.782409 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:12:29.782415 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-02-03 04:12:29.782422 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-02-03 04:12:29.782428 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-02-03 04:12:29.782434 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-02-03 04:12:29.782441 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-02-03 04:12:29.782447 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-02-03 04:12:29.782453 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:12:29.782460 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-02-03 04:12:29.782470 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-02-03 04:12:29.782477 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-02-03 04:12:29.782483 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-02-03 04:12:29.782489 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-02-03 04:12:29.782495 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-02-03 04:12:29.782501 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:12:29.782508 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-02-03 04:12:29.782514 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-02-03 04:12:29.782520 | orchestrator | skipping: [testbed-node-2] => 
(item=nova-compute-ironic)  2026-02-03 04:12:29.782527 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-02-03 04:12:29.782533 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-02-03 04:12:29.782540 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-02-03 04:12:29.782546 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:12:29.782552 | orchestrator | 2026-02-03 04:12:29.782558 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-02-03 04:12:29.782569 | orchestrator | 2026-02-03 04:12:29.782575 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-02-03 04:12:29.782581 | orchestrator | Tuesday 03 February 2026 04:12:28 +0000 (0:00:01.475) 0:08:36.498 ****** 2026-02-03 04:12:29.782587 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-02-03 04:12:29.782594 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-02-03 04:12:29.782600 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:12:29.782606 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-02-03 04:12:29.782613 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-02-03 04:12:29.782619 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:12:29.782625 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-02-03 04:12:29.782631 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-02-03 04:12:29.782638 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:12:29.782644 | orchestrator | 2026-02-03 04:12:29.782654 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-02-03 04:12:31.851673 | orchestrator | 2026-02-03 04:12:31.851765 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-02-03 04:12:31.851782 | 
orchestrator | Tuesday 03 February 2026 04:12:29 +0000 (0:00:00.924) 0:08:37.422 ******
2026-02-03 04:12:31.851793 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:12:31.851805 | orchestrator |
2026-02-03 04:12:31.851816 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-02-03 04:12:31.851826 | orchestrator |
2026-02-03 04:12:31.851833 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-02-03 04:12:31.851839 | orchestrator | Tuesday 03 February 2026 04:12:30 +0000 (0:00:00.823) 0:08:38.245 ******
2026-02-03 04:12:31.851846 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:12:31.851852 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:12:31.851858 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:12:31.851865 | orchestrator |
2026-02-03 04:12:31.851871 | orchestrator | PLAY RECAP *********************************************************************
2026-02-03 04:12:31.851878 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-03 04:12:31.851887 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-02-03 04:12:31.851893 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-02-03 04:12:31.851900 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-02-03 04:12:31.851906 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-02-03 04:12:31.851912 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-02-03 04:12:31.851918 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-02-03 04:12:31.851924 | orchestrator |
2026-02-03 04:12:31.851931 | orchestrator |
2026-02-03 04:12:31.851937 | orchestrator | TASKS RECAP ********************************************************************
2026-02-03 04:12:31.851943 | orchestrator | Tuesday 03 February 2026 04:12:31 +0000 (0:00:00.786) 0:08:39.032 ******
2026-02-03 04:12:31.851949 | orchestrator | ===============================================================================
2026-02-03 04:12:31.851955 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 43.24s
2026-02-03 04:12:31.851961 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 31.67s
2026-02-03 04:12:31.851991 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 25.60s
2026-02-03 04:12:31.851998 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 24.39s
2026-02-03 04:12:31.852004 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 23.06s
2026-02-03 04:12:31.852011 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 21.44s
2026-02-03 04:12:31.852072 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.95s
2026-02-03 04:12:31.852081 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.85s
2026-02-03 04:12:31.852087 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.87s
2026-02-03 04:12:31.852093 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 14.58s
2026-02-03 04:12:31.852099 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.16s
2026-02-03 04:12:31.852106 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.84s
2026-02-03 04:12:31.852112 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.25s
2026-02-03 04:12:31.852118 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.13s
2026-02-03 04:12:31.852124 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.97s
2026-02-03 04:12:31.852135 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.25s
2026-02-03 04:12:31.852145 | orchestrator | nova : Restart nova-api container --------------------------------------- 8.52s
2026-02-03 04:12:31.852155 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.92s
2026-02-03 04:12:31.852166 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.26s
2026-02-03 04:12:31.852177 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 7.23s
2026-02-03 04:12:34.402441 | orchestrator | 2026-02-03 04:12:34 | INFO  | Task 5f24ac23-882a-4a75-ab6d-9e9bb4075fde (horizon) was prepared for execution.
2026-02-03 04:12:34.402526 | orchestrator | 2026-02-03 04:12:34 | INFO  | It takes a moment until task 5f24ac23-882a-4a75-ab6d-9e9bb4075fde (horizon) has been started and output is visible here.
2026-02-03 04:12:41.931339 | orchestrator | 2026-02-03 04:12:41.931436 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-03 04:12:41.931446 | orchestrator | 2026-02-03 04:12:41.931454 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-03 04:12:41.931460 | orchestrator | Tuesday 03 February 2026 04:12:38 +0000 (0:00:00.277) 0:00:00.277 ****** 2026-02-03 04:12:41.931467 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:12:41.931474 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:12:41.931480 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:12:41.931486 | orchestrator | 2026-02-03 04:12:41.931492 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-03 04:12:41.931499 | orchestrator | Tuesday 03 February 2026 04:12:39 +0000 (0:00:00.348) 0:00:00.626 ****** 2026-02-03 04:12:41.931505 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-02-03 04:12:41.931511 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-02-03 04:12:41.931518 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-02-03 04:12:41.931524 | orchestrator | 2026-02-03 04:12:41.931531 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-02-03 04:12:41.931537 | orchestrator | 2026-02-03 04:12:41.931543 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-03 04:12:41.931549 | orchestrator | Tuesday 03 February 2026 04:12:39 +0000 (0:00:00.469) 0:00:01.095 ****** 2026-02-03 04:12:41.931555 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 04:12:41.931562 | orchestrator | 2026-02-03 04:12:41.931568 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 
2026-02-03 04:12:41.931591 | orchestrator | Tuesday 03 February 2026 04:12:40 +0000 (0:00:00.553) 0:00:01.649 ****** 2026-02-03 04:12:41.931615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-03 04:12:41.931640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-03 04:12:41.931657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-03 04:12:41.931664 | orchestrator | 2026-02-03 04:12:41.931671 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-02-03 04:12:41.931676 | orchestrator | Tuesday 03 February 2026 04:12:41 +0000 (0:00:01.199) 0:00:02.848 ****** 2026-02-03 04:12:41.931682 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:12:41.931688 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:12:41.931694 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:12:41.931700 | orchestrator | 2026-02-03 04:12:41.931706 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-03 04:12:41.931712 | orchestrator | Tuesday 03 February 2026 04:12:41 +0000 (0:00:00.483) 0:00:03.332 ****** 2026-02-03 04:12:41.931722 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-03 04:12:48.436913 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-03 04:12:48.436991 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-02-03 04:12:48.437000 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  
2026-02-03 04:12:48.437009 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-02-03 04:12:48.437018 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-02-03 04:12:48.437102 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-02-03 04:12:48.437129 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-02-03 04:12:48.437135 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-03 04:12:48.437141 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-03 04:12:48.437146 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-02-03 04:12:48.437152 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-02-03 04:12:48.437157 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-02-03 04:12:48.437163 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-02-03 04:12:48.437168 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-02-03 04:12:48.437173 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-02-03 04:12:48.437178 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-03 04:12:48.437183 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-03 04:12:48.437188 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-02-03 04:12:48.437194 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-02-03 04:12:48.437199 | orchestrator | skipping: [testbed-node-2] => 
(item={'name': 'mistral', 'enabled': False})  2026-02-03 04:12:48.437204 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-02-03 04:12:48.437209 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-02-03 04:12:48.437214 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-02-03 04:12:48.437221 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-02-03 04:12:48.437228 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-02-03 04:12:48.437234 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-02-03 04:12:48.437250 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-02-03 04:12:48.437256 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-02-03 04:12:48.437261 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-02-03 04:12:48.437266 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-02-03 04:12:48.437272 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-02-03 
04:12:48.437277 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-02-03 04:12:48.437284 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-02-03 04:12:48.437289 | orchestrator | 2026-02-03 04:12:48.437296 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-03 04:12:48.437307 | orchestrator | Tuesday 03 February 2026 04:12:42 +0000 (0:00:00.818) 0:00:04.151 ****** 2026-02-03 04:12:48.437312 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:12:48.437319 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:12:48.437324 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:12:48.437329 | orchestrator | 2026-02-03 04:12:48.437335 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-03 04:12:48.437340 | orchestrator | Tuesday 03 February 2026 04:12:42 +0000 (0:00:00.316) 0:00:04.467 ****** 2026-02-03 04:12:48.437346 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:12:48.437352 | orchestrator | 2026-02-03 04:12:48.437368 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-03 04:12:48.437374 | orchestrator | Tuesday 03 February 2026 04:12:43 +0000 (0:00:00.370) 0:00:04.838 ****** 2026-02-03 04:12:48.437379 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:12:48.437384 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:12:48.437389 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:12:48.437395 | orchestrator | 2026-02-03 04:12:48.437400 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-03 04:12:48.437405 | orchestrator | Tuesday 03 February 2026 04:12:43 +0000 (0:00:00.316) 0:00:05.155 
****** 2026-02-03 04:12:48.437410 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:12:48.437415 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:12:48.437421 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:12:48.437426 | orchestrator | 2026-02-03 04:12:48.437431 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-03 04:12:48.437436 | orchestrator | Tuesday 03 February 2026 04:12:43 +0000 (0:00:00.328) 0:00:05.483 ****** 2026-02-03 04:12:48.437441 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:12:48.437446 | orchestrator | 2026-02-03 04:12:48.437452 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-03 04:12:48.437457 | orchestrator | Tuesday 03 February 2026 04:12:44 +0000 (0:00:00.145) 0:00:05.629 ****** 2026-02-03 04:12:48.437463 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:12:48.437468 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:12:48.437473 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:12:48.437479 | orchestrator | 2026-02-03 04:12:48.437485 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-03 04:12:48.437491 | orchestrator | Tuesday 03 February 2026 04:12:44 +0000 (0:00:00.316) 0:00:05.945 ****** 2026-02-03 04:12:48.437497 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:12:48.437503 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:12:48.437509 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:12:48.437515 | orchestrator | 2026-02-03 04:12:48.437521 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-03 04:12:48.437528 | orchestrator | Tuesday 03 February 2026 04:12:44 +0000 (0:00:00.559) 0:00:06.505 ****** 2026-02-03 04:12:48.437534 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:12:48.437540 | orchestrator | 2026-02-03 04:12:48.437546 | orchestrator | TASK [horizon : 
Update custom policy file name] ******************************** 2026-02-03 04:12:48.437552 | orchestrator | Tuesday 03 February 2026 04:12:45 +0000 (0:00:00.189) 0:00:06.694 ****** 2026-02-03 04:12:48.437558 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:12:48.437564 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:12:48.437571 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:12:48.437577 | orchestrator | 2026-02-03 04:12:48.437583 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-03 04:12:48.437589 | orchestrator | Tuesday 03 February 2026 04:12:45 +0000 (0:00:00.331) 0:00:07.026 ****** 2026-02-03 04:12:48.437595 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:12:48.437601 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:12:48.437607 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:12:48.437613 | orchestrator | 2026-02-03 04:12:48.437620 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-03 04:12:48.437629 | orchestrator | Tuesday 03 February 2026 04:12:45 +0000 (0:00:00.343) 0:00:07.369 ****** 2026-02-03 04:12:48.437643 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:12:48.437653 | orchestrator | 2026-02-03 04:12:48.437661 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-03 04:12:48.437670 | orchestrator | Tuesday 03 February 2026 04:12:45 +0000 (0:00:00.159) 0:00:07.529 ****** 2026-02-03 04:12:48.437680 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:12:48.437689 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:12:48.437698 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:12:48.437704 | orchestrator | 2026-02-03 04:12:48.437710 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-03 04:12:48.437721 | orchestrator | Tuesday 03 February 2026 04:12:46 +0000 (0:00:00.593) 
0:00:08.122 ****** 2026-02-03 04:12:48.437730 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:12:48.437739 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:12:48.437748 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:12:48.437757 | orchestrator | 2026-02-03 04:12:48.437765 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-03 04:12:48.437775 | orchestrator | Tuesday 03 February 2026 04:12:46 +0000 (0:00:00.379) 0:00:08.502 ****** 2026-02-03 04:12:48.437782 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:12:48.437788 | orchestrator | 2026-02-03 04:12:48.437794 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-03 04:12:48.437800 | orchestrator | Tuesday 03 February 2026 04:12:47 +0000 (0:00:00.145) 0:00:08.648 ****** 2026-02-03 04:12:48.437807 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:12:48.437813 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:12:48.437819 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:12:48.437825 | orchestrator | 2026-02-03 04:12:48.437831 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-03 04:12:48.437838 | orchestrator | Tuesday 03 February 2026 04:12:47 +0000 (0:00:00.307) 0:00:08.956 ****** 2026-02-03 04:12:48.437843 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:12:48.437849 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:12:48.437854 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:12:48.437859 | orchestrator | 2026-02-03 04:12:48.437864 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-03 04:12:48.437869 | orchestrator | Tuesday 03 February 2026 04:12:47 +0000 (0:00:00.302) 0:00:09.258 ****** 2026-02-03 04:12:48.437874 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:12:48.437879 | orchestrator | 2026-02-03 04:12:48.437884 | orchestrator | 
TASK [horizon : Update custom policy file name] ******************************** 2026-02-03 04:12:48.437889 | orchestrator | Tuesday 03 February 2026 04:12:47 +0000 (0:00:00.141) 0:00:09.399 ****** 2026-02-03 04:12:48.437894 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:12:48.437899 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:12:48.437904 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:12:48.437909 | orchestrator | 2026-02-03 04:12:48.437915 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-03 04:12:48.437924 | orchestrator | Tuesday 03 February 2026 04:12:48 +0000 (0:00:00.558) 0:00:09.958 ****** 2026-02-03 04:13:02.428248 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:13:02.428338 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:13:02.428346 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:13:02.428353 | orchestrator | 2026-02-03 04:13:02.428360 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-03 04:13:02.428371 | orchestrator | Tuesday 03 February 2026 04:12:48 +0000 (0:00:00.349) 0:00:10.307 ****** 2026-02-03 04:13:02.428381 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:13:02.428391 | orchestrator | 2026-02-03 04:13:02.428400 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-03 04:13:02.428410 | orchestrator | Tuesday 03 February 2026 04:12:48 +0000 (0:00:00.148) 0:00:10.455 ****** 2026-02-03 04:13:02.428419 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:13:02.428427 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:13:02.428457 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:13:02.428467 | orchestrator | 2026-02-03 04:13:02.428477 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-03 04:13:02.428487 | orchestrator | Tuesday 03 February 2026 04:12:49 
+0000 (0:00:00.358) 0:00:10.814 ****** 2026-02-03 04:13:02.428496 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:13:02.428505 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:13:02.428515 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:13:02.428524 | orchestrator | 2026-02-03 04:13:02.428530 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-03 04:13:02.428536 | orchestrator | Tuesday 03 February 2026 04:12:49 +0000 (0:00:00.585) 0:00:11.400 ****** 2026-02-03 04:13:02.428542 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:13:02.428547 | orchestrator | 2026-02-03 04:13:02.428553 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-03 04:13:02.428559 | orchestrator | Tuesday 03 February 2026 04:12:50 +0000 (0:00:00.140) 0:00:11.540 ****** 2026-02-03 04:13:02.428564 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:13:02.428570 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:13:02.428575 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:13:02.428580 | orchestrator | 2026-02-03 04:13:02.428586 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-03 04:13:02.428592 | orchestrator | Tuesday 03 February 2026 04:12:50 +0000 (0:00:00.341) 0:00:11.882 ****** 2026-02-03 04:13:02.428597 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:13:02.428603 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:13:02.428608 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:13:02.428613 | orchestrator | 2026-02-03 04:13:02.428619 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-03 04:13:02.428624 | orchestrator | Tuesday 03 February 2026 04:12:50 +0000 (0:00:00.367) 0:00:12.250 ****** 2026-02-03 04:13:02.428630 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:13:02.428635 | orchestrator | 2026-02-03 04:13:02.428641 
| orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-03 04:13:02.428646 | orchestrator | Tuesday 03 February 2026 04:12:50 +0000 (0:00:00.144) 0:00:12.395 ****** 2026-02-03 04:13:02.428651 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:13:02.428657 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:13:02.428662 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:13:02.428668 | orchestrator | 2026-02-03 04:13:02.428673 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-03 04:13:02.428679 | orchestrator | Tuesday 03 February 2026 04:12:51 +0000 (0:00:00.538) 0:00:12.934 ****** 2026-02-03 04:13:02.428684 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:13:02.428690 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:13:02.428695 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:13:02.428700 | orchestrator | 2026-02-03 04:13:02.428706 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-03 04:13:02.428711 | orchestrator | Tuesday 03 February 2026 04:12:51 +0000 (0:00:00.327) 0:00:13.261 ****** 2026-02-03 04:13:02.428717 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:13:02.428722 | orchestrator | 2026-02-03 04:13:02.428728 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-03 04:13:02.428745 | orchestrator | Tuesday 03 February 2026 04:12:51 +0000 (0:00:00.145) 0:00:13.406 ****** 2026-02-03 04:13:02.428751 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:13:02.428756 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:13:02.428762 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:13:02.428767 | orchestrator | 2026-02-03 04:13:02.428773 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-02-03 04:13:02.428778 | orchestrator | Tuesday 03 February 
2026 04:12:52 +0000 (0:00:00.398) 0:00:13.805 ****** 2026-02-03 04:13:02.428784 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:13:02.428789 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:13:02.428795 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:13:02.428805 | orchestrator | 2026-02-03 04:13:02.428812 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-02-03 04:13:02.428818 | orchestrator | Tuesday 03 February 2026 04:12:53 +0000 (0:00:01.628) 0:00:15.433 ****** 2026-02-03 04:13:02.428825 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-03 04:13:02.428833 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-03 04:13:02.428840 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-03 04:13:02.428846 | orchestrator | 2026-02-03 04:13:02.428852 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-02-03 04:13:02.428858 | orchestrator | Tuesday 03 February 2026 04:12:55 +0000 (0:00:01.967) 0:00:17.401 ****** 2026-02-03 04:13:02.428865 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-03 04:13:02.428873 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-03 04:13:02.428880 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-03 04:13:02.428886 | orchestrator | 2026-02-03 04:13:02.428892 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-02-03 04:13:02.428911 | orchestrator | Tuesday 03 February 2026 04:12:57 +0000 (0:00:01.943) 0:00:19.345 ****** 2026-02-03 04:13:02.428919 | orchestrator | changed: [testbed-node-0] 
=> (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-03 04:13:02.428926 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-03 04:13:02.428932 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-03 04:13:02.428938 | orchestrator | 2026-02-03 04:13:02.428945 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-02-03 04:13:02.428951 | orchestrator | Tuesday 03 February 2026 04:12:59 +0000 (0:00:01.539) 0:00:20.885 ****** 2026-02-03 04:13:02.428957 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:13:02.428964 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:13:02.428970 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:13:02.428977 | orchestrator | 2026-02-03 04:13:02.428984 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-02-03 04:13:02.428990 | orchestrator | Tuesday 03 February 2026 04:12:59 +0000 (0:00:00.346) 0:00:21.231 ****** 2026-02-03 04:13:02.428996 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:13:02.429002 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:13:02.429008 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:13:02.429015 | orchestrator | 2026-02-03 04:13:02.429021 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-03 04:13:02.429028 | orchestrator | Tuesday 03 February 2026 04:13:00 +0000 (0:00:00.536) 0:00:21.768 ****** 2026-02-03 04:13:02.429052 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 04:13:02.429060 | orchestrator | 2026-02-03 04:13:02.429066 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-02-03 04:13:02.429073 | orchestrator | 
Tuesday 03 February 2026 04:13:00 +0000 (0:00:00.643) 0:00:22.412 ****** 2026-02-03 04:13:02.429088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-03 04:13:02.429110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-03 04:13:03.326704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-03 04:13:03.326857 | orchestrator | 2026-02-03 04:13:03.326883 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-02-03 04:13:03.326900 | orchestrator | Tuesday 03 February 2026 04:13:02 +0000 (0:00:01.530) 0:00:23.943 ****** 2026-02-03 04:13:03.326942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-03 04:13:03.326972 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:13:03.327001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-03 04:13:03.327019 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:13:03.327144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-03 04:13:05.650850 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:13:05.650923 | orchestrator | 2026-02-03 04:13:05.650931 | orchestrator | TASK [service-cert-copy : horizon | 
Copying over backend internal TLS key] ***** 2026-02-03 04:13:05.650937 | orchestrator | Tuesday 03 February 2026 04:13:03 +0000 (0:00:00.903) 0:00:24.846 ****** 2026-02-03 04:13:05.650959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-03 04:13:05.650967 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:13:05.650984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-03 04:13:05.651005 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:13:05.651011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-03 04:13:05.651016 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:13:05.651021 | orchestrator | 2026-02-03 04:13:05.651026 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-02-03 04:13:05.651086 | orchestrator | Tuesday 03 February 2026 04:13:04 +0000 (0:00:00.899) 0:00:25.745 ****** 2026-02-03 04:13:05.651102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-03 04:13:50.142904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-03 04:13:50.143026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': 
{'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-03 04:13:50.143034 | orchestrator | 
2026-02-03 04:13:50.143040 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-03 04:13:50.143046 | orchestrator | Tuesday 03 February 2026 04:13:05 +0000 (0:00:01.430) 0:00:27.176 ******
2026-02-03 04:13:50.143050 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:13:50.143090 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:13:50.143094 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:13:50.143098 | orchestrator |
2026-02-03 04:13:50.143102 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-03 04:13:50.143106 | orchestrator | Tuesday 03 February 2026 04:13:06 +0000 (0:00:00.546) 0:00:27.722 ******
2026-02-03 04:13:50.143110 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 04:13:50.143114 | orchestrator |
2026-02-03 04:13:50.143118 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-02-03 04:13:50.143122 | orchestrator | Tuesday 03 February 2026 04:13:06 +0000 (0:00:00.563) 0:00:28.285 ******
2026-02-03 04:13:50.143126 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:13:50.143130 | orchestrator |
2026-02-03 04:13:50.143134 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-02-03 04:13:50.143137 | orchestrator | Tuesday 03 February 2026 04:13:09 +0000 (0:00:02.257) 0:00:30.542 ******
2026-02-03 04:13:50.143141 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:13:50.143145 | orchestrator |
2026-02-03 04:13:50.143149 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-02-03 04:13:50.143153 | orchestrator | Tuesday 03 February 2026 04:13:11 +0000 (0:00:02.204) 0:00:32.747 ******
2026-02-03 04:13:50.143161 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:13:50.143165 | orchestrator
| 2026-02-03 04:13:50.143169 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-03 04:13:50.143172 | orchestrator | Tuesday 03 February 2026 04:13:27 +0000 (0:00:16.733) 0:00:49.481 ****** 2026-02-03 04:13:50.143176 | orchestrator | 2026-02-03 04:13:50.143180 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-03 04:13:50.143183 | orchestrator | Tuesday 03 February 2026 04:13:28 +0000 (0:00:00.281) 0:00:49.763 ****** 2026-02-03 04:13:50.143187 | orchestrator | 2026-02-03 04:13:50.143191 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-03 04:13:50.143195 | orchestrator | Tuesday 03 February 2026 04:13:28 +0000 (0:00:00.072) 0:00:49.835 ****** 2026-02-03 04:13:50.143198 | orchestrator | 2026-02-03 04:13:50.143202 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-02-03 04:13:50.143206 | orchestrator | Tuesday 03 February 2026 04:13:28 +0000 (0:00:00.074) 0:00:49.910 ****** 2026-02-03 04:13:50.143209 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:13:50.143213 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:13:50.143217 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:13:50.143220 | orchestrator | 2026-02-03 04:13:50.143224 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 04:13:50.143229 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-03 04:13:50.143234 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-03 04:13:50.143238 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-03 04:13:50.143242 | orchestrator | 2026-02-03 04:13:50.143246 | orchestrator | 2026-02-03 04:13:50.143249 
| orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 04:13:50.143253 | orchestrator | Tuesday 03 February 2026 04:13:50 +0000 (0:00:21.737) 0:01:11.647 ****** 2026-02-03 04:13:50.143257 | orchestrator | =============================================================================== 2026-02-03 04:13:50.143260 | orchestrator | horizon : Restart horizon container ------------------------------------ 21.74s 2026-02-03 04:13:50.143264 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.73s 2026-02-03 04:13:50.143268 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.26s 2026-02-03 04:13:50.143272 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.20s 2026-02-03 04:13:50.143279 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.97s 2026-02-03 04:13:50.143283 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.94s 2026-02-03 04:13:50.143287 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.63s 2026-02-03 04:13:50.143291 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.54s 2026-02-03 04:13:50.143295 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.53s 2026-02-03 04:13:50.143298 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.43s 2026-02-03 04:13:50.143302 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.20s 2026-02-03 04:13:50.143306 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.90s 2026-02-03 04:13:50.143309 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.90s 2026-02-03 04:13:50.143316 | orchestrator 
| horizon : include_tasks ------------------------------------------------- 0.82s 2026-02-03 04:13:50.560511 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.64s 2026-02-03 04:13:50.560604 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.59s 2026-02-03 04:13:50.560642 | orchestrator | horizon : Update policy file name --------------------------------------- 0.59s 2026-02-03 04:13:50.560652 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.56s 2026-02-03 04:13:50.560661 | orchestrator | horizon : Update policy file name --------------------------------------- 0.56s 2026-02-03 04:13:50.560670 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.56s 2026-02-03 04:13:53.052835 | orchestrator | 2026-02-03 04:13:53 | INFO  | Task 959db8e7-cac5-4e86-a3a6-bed5bc0cd6ee (skyline) was prepared for execution. 2026-02-03 04:13:53.052908 | orchestrator | 2026-02-03 04:13:53 | INFO  | It takes a moment until task 959db8e7-cac5-4e86-a3a6-bed5bc0cd6ee (skyline) has been started and output is visible here. 
2026-02-03 04:14:24.195914 | orchestrator |
2026-02-03 04:14:24.196034 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-03 04:14:24.196051 | orchestrator |
2026-02-03 04:14:24.196064 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-03 04:14:24.196110 | orchestrator | Tuesday 03 February 2026 04:13:57 +0000 (0:00:00.270) 0:00:00.270 ******
2026-02-03 04:14:24.196122 | orchestrator | ok: [testbed-node-0]
2026-02-03 04:14:24.196134 | orchestrator | ok: [testbed-node-1]
2026-02-03 04:14:24.196145 | orchestrator | ok: [testbed-node-2]
2026-02-03 04:14:24.196156 | orchestrator |
2026-02-03 04:14:24.196167 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-03 04:14:24.196179 | orchestrator | Tuesday 03 February 2026 04:13:57 +0000 (0:00:00.327) 0:00:00.597 ******
2026-02-03 04:14:24.196190 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True)
2026-02-03 04:14:24.196201 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True)
2026-02-03 04:14:24.196212 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True)
2026-02-03 04:14:24.196223 | orchestrator |
2026-02-03 04:14:24.196234 | orchestrator | PLAY [Apply role skyline] ******************************************************
2026-02-03 04:14:24.196245 | orchestrator |
2026-02-03 04:14:24.196256 | orchestrator | TASK [skyline : include_tasks] *************************************************
2026-02-03 04:14:24.196267 | orchestrator | Tuesday 03 February 2026 04:13:58 +0000 (0:00:00.470) 0:00:01.068 ******
2026-02-03 04:14:24.196278 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 04:14:24.196290 | orchestrator |
2026-02-03 04:14:24.196301 | orchestrator | TASK [service-ks-register : skyline | Creating services] ***********************
2026-02-03 04:14:24.196312 | orchestrator | Tuesday 03 February 2026 04:13:58 +0000 (0:00:00.576) 0:00:01.645 ******
2026-02-03 04:14:24.196323 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel))
2026-02-03 04:14:24.196334 | orchestrator |
2026-02-03 04:14:24.196345 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] **********************
2026-02-03 04:14:24.196356 | orchestrator | Tuesday 03 February 2026 04:14:02 +0000 (0:00:03.232) 0:00:04.877 ******
2026-02-03 04:14:24.196367 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal)
2026-02-03 04:14:24.196378 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public)
2026-02-03 04:14:24.196389 | orchestrator |
2026-02-03 04:14:24.196400 | orchestrator | TASK [service-ks-register : skyline | Creating projects] ***********************
2026-02-03 04:14:24.196411 | orchestrator | Tuesday 03 February 2026 04:14:08 +0000 (0:00:06.525) 0:00:11.402 ******
2026-02-03 04:14:24.196422 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-03 04:14:24.196434 | orchestrator |
2026-02-03 04:14:24.196445 | orchestrator | TASK [service-ks-register : skyline | Creating users] **************************
2026-02-03 04:14:24.196459 | orchestrator | Tuesday 03 February 2026 04:14:11 +0000 (0:00:03.274) 0:00:14.677 ******
2026-02-03 04:14:24.196472 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-03 04:14:24.196485 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service)
2026-02-03 04:14:24.196524 | orchestrator |
2026-02-03 04:14:24.196538 | orchestrator | TASK [service-ks-register : skyline | Creating roles] **************************
2026-02-03 04:14:24.196551 | orchestrator | Tuesday 03 February 2026 04:14:15 +0000 (0:00:03.958) 0:00:18.636 ******
2026-02-03 04:14:24.196562 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-03 04:14:24.196572 | orchestrator |
2026-02-03 04:14:24.196583 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] *********************
2026-02-03 04:14:24.196608 | orchestrator | Tuesday 03 February 2026 04:14:19 +0000 (0:00:03.187) 0:00:21.824 ******
2026-02-03 04:14:24.196619 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin)
2026-02-03 04:14:24.196630 | orchestrator |
2026-02-03 04:14:24.196641 | orchestrator | TASK [skyline : Ensuring config directories exist] *****************************
2026-02-03 04:14:24.196652 | orchestrator | Tuesday 03 February 2026 04:14:22 +0000 (0:00:03.756) 0:00:25.580 ******
2026-02-03 04:14:24.196667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-03 04:14:24.196701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-03 04:14:24.196715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-03 04:14:24.196727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-03 04:14:24.196754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-03 04:14:24.196774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-03 04:14:28.190538 | orchestrator |
2026-02-03 04:14:28.190636 | orchestrator | TASK [skyline : include_tasks] *************************************************
2026-02-03 04:14:28.190653 | orchestrator | Tuesday 03 February 2026 04:14:24 +0000 (0:00:01.313) 0:00:26.893 ******
2026-02-03 04:14:28.190669 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 04:14:28.190677 | orchestrator |
2026-02-03 04:14:28.190688 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ********
2026-02-03 04:14:28.190700 | orchestrator | Tuesday 03 February 2026 04:14:24 +0000 (0:00:00.817) 0:00:27.710 ******
2026-02-03 04:14:28.190715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-03 04:14:28.190771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-03 04:14:28.190783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-03 04:14:28.190813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-03 04:14:28.190828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-03 04:14:28.190848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-03 04:14:28.190855 | orchestrator |
2026-02-03 04:14:28.190862 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] ***
2026-02-03 04:14:28.190869 | orchestrator | Tuesday 03 February 2026 04:14:27 +0000 (0:00:02.550) 0:00:30.261 ******
2026-02-03 04:14:28.190881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-03 04:14:28.190888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-03 04:14:28.190896 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:14:28.190910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-03 04:14:29.503465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-03 04:14:29.503612 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:14:29.503664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-03 04:14:29.503687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-03 04:14:29.503706 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:14:29.503724 | orchestrator |
2026-02-03 04:14:29.503745 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] *****
2026-02-03 04:14:29.503764 | orchestrator | Tuesday 03 February 2026 04:14:28 +0000 (0:00:00.632) 0:00:30.894 ******
2026-02-03 04:14:29.503784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-03 04:14:29.503859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-03 04:14:29.503884 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:14:29.503914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-03 04:14:29.503934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-03 04:14:29.503952 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:14:29.503970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-03 04:14:29.504016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-03 04:14:38.177146 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:14:38.177235 | orchestrator |
2026-02-03 04:14:38.177247 | orchestrator | TASK [skyline : Copying over skyline.yaml files for services] ******************
2026-02-03 04:14:38.177258 | orchestrator | Tuesday 03 February 2026 04:14:29 +0000 (0:00:01.304) 0:00:32.198 ******
2026-02-03 04:14:38.177283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-03 04:14:38.177295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external':
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-03 04:14:38.177303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-03 04:14:38.177331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-03 04:14:38.177355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-03 04:14:38.177367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-03 04:14:38.177375 | orchestrator | 2026-02-03 04:14:38.177382 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] ******************* 2026-02-03 04:14:38.177390 | orchestrator | Tuesday 03 February 2026 04:14:32 +0000 (0:00:02.541) 0:00:34.739 ****** 2026-02-03 04:14:38.177397 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-02-03 04:14:38.177405 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-02-03 04:14:38.177413 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-02-03 04:14:38.177421 | orchestrator | 2026-02-03 04:14:38.177429 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ******************** 2026-02-03 04:14:38.177436 | orchestrator | Tuesday 03 February 2026 04:14:33 +0000 (0:00:01.627) 0:00:36.367 ****** 2026-02-03 04:14:38.177444 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-02-03 04:14:38.177458 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-02-03 04:14:38.177466 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-02-03 04:14:38.177473 | orchestrator | 2026-02-03 04:14:38.177481 | orchestrator | TASK [skyline : Copying over config.json files for services] ******************* 2026-02-03 04:14:38.177488 | orchestrator | Tuesday 03 February 2026 04:14:35 +0000 (0:00:02.124) 0:00:38.492 ****** 2026-02-03 04:14:38.177496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-03 04:14:38.177511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-03 04:14:40.230184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-03 04:14:40.231161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-03 04:14:40.231213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-03 04:14:40.231220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-03 04:14:40.231226 | orchestrator | 2026-02-03 04:14:40.231235 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-02-03 04:14:40.231245 | orchestrator | Tuesday 03 February 2026 04:14:38 +0000 (0:00:02.385) 0:00:40.878 ****** 2026-02-03 04:14:40.231254 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:14:40.231263 | orchestrator | skipping: 
[testbed-node-1] 2026-02-03 04:14:40.231270 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:14:40.231278 | orchestrator | 2026-02-03 04:14:40.231304 | orchestrator | TASK [skyline : Check skyline container] *************************************** 2026-02-03 04:14:40.231312 | orchestrator | Tuesday 03 February 2026 04:14:38 +0000 (0:00:00.300) 0:00:41.178 ****** 2026-02-03 04:14:40.231330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-03 04:14:40.231338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-03 04:14:40.231352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-03 04:14:40.231360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-03 04:14:40.231379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-03 04:15:19.364870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-03 04:15:19.364996 | orchestrator | 2026-02-03 04:15:19.365013 | orchestrator | TASK [skyline : Creating Skyline database] ************************************* 2026-02-03 04:15:19.365025 | orchestrator | Tuesday 03 February 2026 04:14:40 +0000 (0:00:01.753) 0:00:42.932 ****** 2026-02-03 04:15:19.365035 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:15:19.365046 | orchestrator | 2026-02-03 04:15:19.365056 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ******** 2026-02-03 04:15:19.365066 | orchestrator | Tuesday 03 February 2026 04:14:42 +0000 (0:00:02.140) 0:00:45.072 ****** 2026-02-03 04:15:19.365076 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:15:19.365115 | orchestrator | 2026-02-03 04:15:19.365128 | orchestrator | TASK [skyline : Running Skyline bootstrap container] *************************** 2026-02-03 04:15:19.365138 | orchestrator | Tuesday 03 February 2026 04:14:44 +0000 (0:00:02.326) 0:00:47.398 ****** 2026-02-03 04:15:19.365148 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:15:19.365158 | orchestrator | 2026-02-03 04:15:19.365168 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-02-03 04:15:19.365178 | orchestrator | Tuesday 03 February 2026 04:14:52 +0000 (0:00:07.757) 0:00:55.156 ****** 2026-02-03 04:15:19.365188 | orchestrator | 2026-02-03 04:15:19.365198 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-02-03 04:15:19.365208 | orchestrator | Tuesday 03 February 2026 04:14:52 +0000 (0:00:00.068) 0:00:55.224 ****** 2026-02-03 04:15:19.365218 | orchestrator | 2026-02-03 04:15:19.365228 | orchestrator | TASK [skyline : Flush handlers] 
************************************************ 2026-02-03 04:15:19.365238 | orchestrator | Tuesday 03 February 2026 04:14:52 +0000 (0:00:00.070) 0:00:55.295 ****** 2026-02-03 04:15:19.365248 | orchestrator | 2026-02-03 04:15:19.365258 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] **************** 2026-02-03 04:15:19.365268 | orchestrator | Tuesday 03 February 2026 04:14:52 +0000 (0:00:00.070) 0:00:55.366 ****** 2026-02-03 04:15:19.365278 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:15:19.365288 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:15:19.365297 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:15:19.365307 | orchestrator | 2026-02-03 04:15:19.365317 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ****************** 2026-02-03 04:15:19.365327 | orchestrator | Tuesday 03 February 2026 04:15:04 +0000 (0:00:11.472) 0:01:06.839 ****** 2026-02-03 04:15:19.365337 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:15:19.365346 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:15:19.365356 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:15:19.365366 | orchestrator | 2026-02-03 04:15:19.365376 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 04:15:19.365387 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-03 04:15:19.365399 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-03 04:15:19.365412 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-03 04:15:19.365424 | orchestrator | 2026-02-03 04:15:19.365436 | orchestrator | 2026-02-03 04:15:19.365447 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 04:15:19.365459 | orchestrator | Tuesday 03 
February 2026 04:15:18 +0000 (0:00:14.752) 0:01:21.591 ****** 2026-02-03 04:15:19.365480 | orchestrator | =============================================================================== 2026-02-03 04:15:19.365492 | orchestrator | skyline : Restart skyline-console container ---------------------------- 14.75s 2026-02-03 04:15:19.365504 | orchestrator | skyline : Restart skyline-apiserver container -------------------------- 11.47s 2026-02-03 04:15:19.365515 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 7.76s 2026-02-03 04:15:19.365541 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 6.53s 2026-02-03 04:15:19.365552 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 3.96s 2026-02-03 04:15:19.365564 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 3.76s 2026-02-03 04:15:19.365576 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 3.27s 2026-02-03 04:15:19.365588 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.23s 2026-02-03 04:15:19.365617 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 3.19s 2026-02-03 04:15:19.365629 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.55s 2026-02-03 04:15:19.365640 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.54s 2026-02-03 04:15:19.365652 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.39s 2026-02-03 04:15:19.365663 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.33s 2026-02-03 04:15:19.365674 | orchestrator | skyline : Creating Skyline database ------------------------------------- 2.14s 2026-02-03 04:15:19.365686 | orchestrator | skyline : Copying over nginx.conf 
files for services -------------------- 2.13s 2026-02-03 04:15:19.365697 | orchestrator | skyline : Check skyline container --------------------------------------- 1.75s 2026-02-03 04:15:19.365709 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.63s 2026-02-03 04:15:19.365720 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.31s 2026-02-03 04:15:19.365732 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.30s 2026-02-03 04:15:19.365743 | orchestrator | skyline : include_tasks ------------------------------------------------- 0.82s 2026-02-03 04:15:21.971513 | orchestrator | 2026-02-03 04:15:21 | INFO  | Task 53a78dfc-1840-43d1-bf1c-d6c7055214b0 (glance) was prepared for execution. 2026-02-03 04:15:21.971613 | orchestrator | 2026-02-03 04:15:21 | INFO  | It takes a moment until task 53a78dfc-1840-43d1-bf1c-d6c7055214b0 (glance) has been started and output is visible here. 
2026-02-03 04:15:56.337655 | orchestrator | 2026-02-03 04:15:56.338484 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-03 04:15:56.338514 | orchestrator | 2026-02-03 04:15:56.338525 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-03 04:15:56.338536 | orchestrator | Tuesday 03 February 2026 04:15:26 +0000 (0:00:00.290) 0:00:00.290 ****** 2026-02-03 04:15:56.338545 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:15:56.338556 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:15:56.338565 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:15:56.338574 | orchestrator | 2026-02-03 04:15:56.338582 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-03 04:15:56.338590 | orchestrator | Tuesday 03 February 2026 04:15:26 +0000 (0:00:00.347) 0:00:00.637 ****** 2026-02-03 04:15:56.338597 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-02-03 04:15:56.338605 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-02-03 04:15:56.338613 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-02-03 04:15:56.338621 | orchestrator | 2026-02-03 04:15:56.338628 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-02-03 04:15:56.338636 | orchestrator | 2026-02-03 04:15:56.338643 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-03 04:15:56.338672 | orchestrator | Tuesday 03 February 2026 04:15:27 +0000 (0:00:00.476) 0:00:01.114 ****** 2026-02-03 04:15:56.338680 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 04:15:56.338688 | orchestrator | 2026-02-03 04:15:56.338696 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-02-03 
04:15:56.338704 | orchestrator | Tuesday 03 February 2026 04:15:27 +0000 (0:00:00.573) 0:00:01.687 ****** 2026-02-03 04:15:56.338711 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-02-03 04:15:56.338719 | orchestrator | 2026-02-03 04:15:56.338726 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-02-03 04:15:56.338733 | orchestrator | Tuesday 03 February 2026 04:15:31 +0000 (0:00:03.392) 0:00:05.080 ****** 2026-02-03 04:15:56.338741 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-02-03 04:15:56.338749 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-02-03 04:15:56.338757 | orchestrator | 2026-02-03 04:15:56.338765 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-02-03 04:15:56.338772 | orchestrator | Tuesday 03 February 2026 04:15:37 +0000 (0:00:06.647) 0:00:11.727 ****** 2026-02-03 04:15:56.338781 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-03 04:15:56.338789 | orchestrator | 2026-02-03 04:15:56.338796 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-02-03 04:15:56.338804 | orchestrator | Tuesday 03 February 2026 04:15:41 +0000 (0:00:03.251) 0:00:14.979 ****** 2026-02-03 04:15:56.338811 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-03 04:15:56.338819 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-02-03 04:15:56.338826 | orchestrator | 2026-02-03 04:15:56.338834 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-02-03 04:15:56.338841 | orchestrator | Tuesday 03 February 2026 04:15:45 +0000 (0:00:04.011) 0:00:18.991 ****** 2026-02-03 04:15:56.338849 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-03 
04:15:56.338856 | orchestrator |
2026-02-03 04:15:56.338877 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2026-02-03 04:15:56.338884 | orchestrator | Tuesday 03 February 2026 04:15:48 +0000 (0:00:03.201) 0:00:22.192 ******
2026-02-03 04:15:56.338892 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2026-02-03 04:15:56.338899 | orchestrator |
2026-02-03 04:15:56.338907 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-02-03 04:15:56.338914 | orchestrator | Tuesday 03 February 2026 04:15:52 +0000 (0:00:03.823) 0:00:26.016 ******
2026-02-03 04:15:56.338948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-03 04:15:56.338966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-03 04:15:56.338980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-03 04:15:56.338989 | orchestrator |
2026-02-03 04:15:56.338997 | orchestrator | TASK [glance : include_tasks]
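The `healthcheck` fields in the items above (interval 30, retries 3, timeout 30, probing `healthcheck_curl http://<node-ip>:9292`) describe a simple retry loop. A generic stand-in for that loop (a sketch, not Kolla's actual healthcheck script) looks like:

```python
import time

def healthcheck(probe, retries=3, interval=30, sleep=time.sleep):
    """Call `probe` up to `retries` times, sleeping `interval` seconds
    between failed attempts; return True as soon as one probe succeeds.
    Mirrors the retries/interval fields shown in the log items."""
    for attempt in range(1, retries + 1):
        try:
            if probe():
                return True
        except Exception:
            pass  # a failed probe counts against the retry budget
        if attempt < retries:
            sleep(interval)
    return False

# `probe` would be an HTTP GET against http://192.168.16.10:9292
# (or whichever node is being checked), with its own 30 s timeout.
```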
**************************************************
2026-02-03 04:15:56.339004 | orchestrator | Tuesday 03 February 2026 04:15:55 +0000 (0:00:03.374) 0:00:29.390 ******
2026-02-03 04:15:56.339017 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 04:15:56.339025 | orchestrator |
2026-02-03 04:15:56.339037 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-02-03 04:16:11.954507 | orchestrator | Tuesday 03 February 2026 04:15:56 +0000 (0:00:00.765) 0:00:30.156 ******
2026-02-03 04:16:11.954661 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:16:11.954688 | orchestrator | changed: [testbed-node-1]
2026-02-03 04:16:11.954707 | orchestrator | changed: [testbed-node-2]
2026-02-03 04:16:11.954728 | orchestrator |
2026-02-03 04:16:11.954750 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-02-03 04:16:11.954770 | orchestrator | Tuesday 03 February 2026 04:16:00 +0000 (0:00:03.691) 0:00:33.847 ******
2026-02-03 04:16:11.954788 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-03 04:16:11.954802 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-03 04:16:11.954813 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-03 04:16:11.954825 | orchestrator |
2026-02-03 04:16:11.954837 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-02-03 04:16:11.954848 | orchestrator | Tuesday 03 February 2026 04:16:01 +0000 (0:00:01.585) 0:00:35.433 ******
2026-02-03 04:16:11.954860 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-03 04:16:11.954871 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-03 04:16:11.954882 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-03 04:16:11.954893 | orchestrator |
2026-02-03 04:16:11.954904 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-02-03 04:16:11.954915 | orchestrator | Tuesday 03 February 2026 04:16:03 +0000 (0:00:01.433) 0:00:36.867 ******
2026-02-03 04:16:11.954927 | orchestrator | ok: [testbed-node-0]
2026-02-03 04:16:11.954938 | orchestrator | ok: [testbed-node-1]
2026-02-03 04:16:11.954949 | orchestrator | ok: [testbed-node-2]
2026-02-03 04:16:11.954960 | orchestrator |
2026-02-03 04:16:11.954971 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-02-03 04:16:11.954983 | orchestrator | Tuesday 03 February 2026 04:16:03 +0000 (0:00:00.135) 0:00:37.584 ******
2026-02-03 04:16:11.954994 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:16:11.955005 | orchestrator |
2026-02-03 04:16:11.955016 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-02-03 04:16:11.955028 | orchestrator | Tuesday 03 February 2026 04:16:03 +0000 (0:00:00.304) 0:00:37.720 ******
2026-02-03 04:16:11.955041 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:16:11.955054 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:16:11.955067 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:16:11.955079 | orchestrator |
2026-02-03 04:16:11.955093 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-02-03 04:16:11.955137 | orchestrator | Tuesday 03 February 2026 04:16:04 +0000 (0:00:00.304) 0:00:38.025 ******
2026-02-03 04:16:11.955152 | orchestrator | included:
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 04:16:11.955165 | orchestrator |
2026-02-03 04:16:11.955178 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2026-02-03 04:16:11.955191 | orchestrator | Tuesday 03 February 2026 04:16:05 +0000 (0:00:00.830) 0:00:38.855 ******
2026-02-03 04:16:11.955229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-03 04:16:11.955298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-03 04:16:11.955321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-03 04:16:11.955343 | orchestrator |
2026-02-03 04:16:11.955356 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] ***
2026-02-03 04:16:11.955369 | orchestrator | Tuesday 03 February 2026 04:16:08 +0000 (0:00:03.869) 0:00:42.725 ******
2026-02-03 04:16:11.955392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name':
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-03 04:16:15.686526 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:16:15.686668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-03 04:16:15.686734 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:16:15.686760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-03 04:16:15.686782 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:16:15.686804 | orchestrator |
2026-02-03 04:16:15.686825 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ******
2026-02-03 04:16:15.686838 | orchestrator | Tuesday 03 February 2026 04:16:11 +0000 (0:00:03.045) 0:00:45.771 ******
2026-02-03 04:16:15.686888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'},
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-03 04:16:15.686924 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:16:15.686943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-03 04:16:15.686963 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:16:15.686995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-03 04:16:52.841874 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:16:52.841980 | orchestrator |
2026-02-03 04:16:52.841995 | orchestrator | TASK [glance : Creating TLS backend PEM File] **********************************
2026-02-03 04:16:52.842078 | orchestrator | Tuesday 03 February 2026 04:16:15 +0000 (0:00:03.735) 0:00:49.506 ******
2026-02-03 04:16:52.842090 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:16:52.842099 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:16:52.842108 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:16:52.842169 | orchestrator |
2026-02-03 04:16:52.842181 | orchestrator | TASK [glance : Copying over config.json files for services] ********************
2026-02-03 04:16:52.842190 | orchestrator | Tuesday 03 February 2026 04:16:19 +0000 (0:00:03.622) 0:00:53.129 ******
2026-02-03 04:16:52.842215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api',
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-03 04:16:52.842229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-03 04:16:52.842261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-03 04:16:52.842280 | orchestrator |
2026-02-03 04:16:52.842289 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2026-02-03 04:16:52.842298 | orchestrator | Tuesday 03 February 2026 04:16:23 +0000 (0:00:04.104) 0:00:57.233 ******
2026-02-03 04:16:52.842307 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:16:52.842316 | orchestrator | changed: [testbed-node-2]
2026-02-03 04:16:52.842325 | orchestrator | changed: [testbed-node-1]
2026-02-03 04:16:52.842333 | orchestrator |
2026-02-03 04:16:52.842342 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2026-02-03 04:16:52.842351 | orchestrator | Tuesday 03 February 2026 04:16:29 +0000 (0:00:05.694) 0:01:02.928 ******
2026-02-03 04:16:52.842360 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:16:52.842369 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:16:52.842377 |
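The identical `custom_member_list` entries repeated in each of these items correspond, roughly, to an HAProxy backend like the one below. This is a reconstruction from the log fields only (the backend name and section layout are assumptions, not the generated Kolla haproxy configuration):

```
# Reconstructed sketch from the log's haproxy fields; names are assumed.
backend glance_api_back
    mode http
    timeout server 6h
    server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5
    server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5
    server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5
```

Here `check inter 2000 rise 2 fall 5` means each member is health-checked every 2000 ms, marked up after 2 consecutive successes and down after 5 consecutive failures.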
orchestrator | skipping: [testbed-node-2] 2026-02-03 04:16:52.842386 | orchestrator | 2026-02-03 04:16:52.842395 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-02-03 04:16:52.842403 | orchestrator | Tuesday 03 February 2026 04:16:32 +0000 (0:00:03.681) 0:01:06.610 ****** 2026-02-03 04:16:52.842412 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:16:52.842421 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:16:52.842431 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:16:52.842441 | orchestrator | 2026-02-03 04:16:52.842451 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-02-03 04:16:52.842460 | orchestrator | Tuesday 03 February 2026 04:16:36 +0000 (0:00:03.471) 0:01:10.081 ****** 2026-02-03 04:16:52.842470 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:16:52.842480 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:16:52.842490 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:16:52.842500 | orchestrator | 2026-02-03 04:16:52.842510 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-02-03 04:16:52.842521 | orchestrator | Tuesday 03 February 2026 04:16:40 +0000 (0:00:04.410) 0:01:14.492 ****** 2026-02-03 04:16:52.842531 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:16:52.842541 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:16:52.842551 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:16:52.842561 | orchestrator | 2026-02-03 04:16:52.842571 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-02-03 04:16:52.842581 | orchestrator | Tuesday 03 February 2026 04:16:44 +0000 (0:00:03.655) 0:01:18.147 ****** 2026-02-03 04:16:52.842598 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:16:52.842607 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:16:52.842615 | 
orchestrator | skipping: [testbed-node-2] 2026-02-03 04:16:52.842624 | orchestrator | 2026-02-03 04:16:52.842633 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-02-03 04:16:52.842642 | orchestrator | Tuesday 03 February 2026 04:16:44 +0000 (0:00:00.574) 0:01:18.722 ****** 2026-02-03 04:16:52.842651 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-03 04:16:52.842662 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:16:52.842671 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-03 04:16:52.842679 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:16:52.842688 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-03 04:16:52.842697 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:16:52.842706 | orchestrator | 2026-02-03 04:16:52.842714 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-02-03 04:16:52.842723 | orchestrator | Tuesday 03 February 2026 04:16:48 +0000 (0:00:03.411) 0:01:22.133 ****** 2026-02-03 04:16:52.842732 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:16:52.842741 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:16:52.842750 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:16:52.842758 | orchestrator | 2026-02-03 04:16:52.842767 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-02-03 04:16:52.842782 | orchestrator | Tuesday 03 February 2026 04:16:52 +0000 (0:00:04.524) 0:01:26.658 ****** 2026-02-03 04:18:07.770602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-03 04:18:07.770724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-03 04:18:07.770798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-03 04:18:07.770824 | orchestrator | 2026-02-03 04:18:07.770854 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-03 04:18:07.770877 | orchestrator | Tuesday 03 February 2026 04:16:56 +0000 (0:00:03.936) 0:01:30.595 ****** 2026-02-03 04:18:07.770895 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:18:07.770914 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:18:07.770933 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:18:07.770951 | orchestrator | 2026-02-03 04:18:07.770972 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-02-03 04:18:07.770991 | orchestrator | Tuesday 03 February 2026 04:16:57 +0000 (0:00:00.555) 0:01:31.151 ****** 2026-02-03 04:18:07.771010 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:18:07.771029 | orchestrator | 2026-02-03 04:18:07.771049 | orchestrator | TASK 
[glance : Creating Glance database user and setting permissions] ********** 2026-02-03 04:18:07.771068 | orchestrator | Tuesday 03 February 2026 04:16:59 +0000 (0:00:02.241) 0:01:33.393 ****** 2026-02-03 04:18:07.771088 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:18:07.771101 | orchestrator | 2026-02-03 04:18:07.771112 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-02-03 04:18:07.771136 | orchestrator | Tuesday 03 February 2026 04:17:01 +0000 (0:00:02.351) 0:01:35.744 ****** 2026-02-03 04:18:07.771177 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:18:07.771194 | orchestrator | 2026-02-03 04:18:07.771205 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-02-03 04:18:07.771216 | orchestrator | Tuesday 03 February 2026 04:17:04 +0000 (0:00:02.106) 0:01:37.850 ****** 2026-02-03 04:18:07.771227 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:18:07.771238 | orchestrator | 2026-02-03 04:18:07.771249 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-02-03 04:18:07.771260 | orchestrator | Tuesday 03 February 2026 04:17:32 +0000 (0:00:28.744) 0:02:06.595 ****** 2026-02-03 04:18:07.771271 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:18:07.771282 | orchestrator | 2026-02-03 04:18:07.771294 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-03 04:18:07.771305 | orchestrator | Tuesday 03 February 2026 04:17:34 +0000 (0:00:02.220) 0:02:08.815 ****** 2026-02-03 04:18:07.771316 | orchestrator | 2026-02-03 04:18:07.771327 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-03 04:18:07.771338 | orchestrator | Tuesday 03 February 2026 04:17:35 +0000 (0:00:00.085) 0:02:08.901 ****** 2026-02-03 04:18:07.771349 | orchestrator | 2026-02-03 04:18:07.771360 | orchestrator | TASK 
[glance : Flush handlers] ************************************************* 2026-02-03 04:18:07.771371 | orchestrator | Tuesday 03 February 2026 04:17:35 +0000 (0:00:00.080) 0:02:08.981 ****** 2026-02-03 04:18:07.771382 | orchestrator | 2026-02-03 04:18:07.771393 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-02-03 04:18:07.771404 | orchestrator | Tuesday 03 February 2026 04:17:35 +0000 (0:00:00.083) 0:02:09.065 ****** 2026-02-03 04:18:07.771415 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:18:07.771426 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:18:07.771437 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:18:07.771448 | orchestrator | 2026-02-03 04:18:07.771459 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 04:18:07.771471 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-03 04:18:07.771483 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-03 04:18:07.771494 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-03 04:18:07.771506 | orchestrator | 2026-02-03 04:18:07.771517 | orchestrator | 2026-02-03 04:18:07.771528 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 04:18:07.771539 | orchestrator | Tuesday 03 February 2026 04:18:07 +0000 (0:00:32.507) 0:02:41.573 ****** 2026-02-03 04:18:07.771550 | orchestrator | =============================================================================== 2026-02-03 04:18:07.771561 | orchestrator | glance : Restart glance-api container ---------------------------------- 32.51s 2026-02-03 04:18:07.771572 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.74s 2026-02-03 04:18:07.771583 | 
orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.65s 2026-02-03 04:18:07.771607 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.69s 2026-02-03 04:18:08.137776 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.52s 2026-02-03 04:18:08.137859 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.41s 2026-02-03 04:18:08.137870 | orchestrator | glance : Copying over config.json files for services -------------------- 4.10s 2026-02-03 04:18:08.137875 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.01s 2026-02-03 04:18:08.137891 | orchestrator | glance : Check glance containers ---------------------------------------- 3.94s 2026-02-03 04:18:08.137926 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.87s 2026-02-03 04:18:08.137931 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.82s 2026-02-03 04:18:08.137936 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.74s 2026-02-03 04:18:08.137941 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.69s 2026-02-03 04:18:08.137945 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.68s 2026-02-03 04:18:08.137950 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.66s 2026-02-03 04:18:08.137955 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.62s 2026-02-03 04:18:08.137960 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.47s 2026-02-03 04:18:08.137965 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.41s 2026-02-03 04:18:08.137969 | orchestrator | 
service-ks-register : glance | Creating services ------------------------ 3.39s 2026-02-03 04:18:08.137974 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.37s 2026-02-03 04:18:10.586381 | orchestrator | 2026-02-03 04:18:10 | INFO  | Task a68450d2-2bb3-4a97-96e9-cdaae37a5b2d (cinder) was prepared for execution. 2026-02-03 04:18:10.586477 | orchestrator | 2026-02-03 04:18:10 | INFO  | It takes a moment until task a68450d2-2bb3-4a97-96e9-cdaae37a5b2d (cinder) has been started and output is visible here. 2026-02-03 04:18:46.724949 | orchestrator | 2026-02-03 04:18:46.725110 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-03 04:18:46.725139 | orchestrator | 2026-02-03 04:18:46.725235 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-03 04:18:46.725404 | orchestrator | Tuesday 03 February 2026 04:18:15 +0000 (0:00:00.268) 0:00:00.268 ****** 2026-02-03 04:18:46.725425 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:18:46.725445 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:18:46.725464 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:18:46.725483 | orchestrator | 2026-02-03 04:18:46.725503 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-03 04:18:46.725522 | orchestrator | Tuesday 03 February 2026 04:18:15 +0000 (0:00:00.352) 0:00:00.620 ****** 2026-02-03 04:18:46.725540 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-02-03 04:18:46.725559 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-02-03 04:18:46.725578 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-02-03 04:18:46.725596 | orchestrator | 2026-02-03 04:18:46.725614 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-02-03 04:18:46.725632 | orchestrator | 2026-02-03 
04:18:46.725651 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-03 04:18:46.725669 | orchestrator | Tuesday 03 February 2026 04:18:15 +0000 (0:00:00.479) 0:00:01.099 ****** 2026-02-03 04:18:46.725687 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 04:18:46.725707 | orchestrator | 2026-02-03 04:18:46.725724 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-02-03 04:18:46.725742 | orchestrator | Tuesday 03 February 2026 04:18:16 +0000 (0:00:00.646) 0:00:01.746 ****** 2026-02-03 04:18:46.725760 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-02-03 04:18:46.725779 | orchestrator | 2026-02-03 04:18:46.725797 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-02-03 04:18:46.725816 | orchestrator | Tuesday 03 February 2026 04:18:20 +0000 (0:00:03.549) 0:00:05.295 ****** 2026-02-03 04:18:46.725836 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-02-03 04:18:46.725855 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-02-03 04:18:46.725911 | orchestrator | 2026-02-03 04:18:46.725931 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-02-03 04:18:46.725947 | orchestrator | Tuesday 03 February 2026 04:18:26 +0000 (0:00:06.665) 0:00:11.961 ****** 2026-02-03 04:18:46.725965 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-03 04:18:46.725984 | orchestrator | 2026-02-03 04:18:46.726002 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-02-03 04:18:46.726096 | orchestrator | Tuesday 03 February 2026 04:18:30 +0000 (0:00:03.303) 
0:00:15.264 ****** 2026-02-03 04:18:46.726119 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-03 04:18:46.726137 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-02-03 04:18:46.726155 | orchestrator | 2026-02-03 04:18:46.726202 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-02-03 04:18:46.726221 | orchestrator | Tuesday 03 February 2026 04:18:34 +0000 (0:00:03.999) 0:00:19.263 ****** 2026-02-03 04:18:46.726238 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-03 04:18:46.726255 | orchestrator | 2026-02-03 04:18:46.726272 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-02-03 04:18:46.726290 | orchestrator | Tuesday 03 February 2026 04:18:37 +0000 (0:00:03.270) 0:00:22.534 ****** 2026-02-03 04:18:46.726309 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-02-03 04:18:46.726327 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-02-03 04:18:46.726345 | orchestrator | 2026-02-03 04:18:46.726363 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-02-03 04:18:46.726380 | orchestrator | Tuesday 03 February 2026 04:18:44 +0000 (0:00:07.341) 0:00:29.875 ****** 2026-02-03 04:18:46.726512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-03 04:18:46.726575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-03 04:18:46.726597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-03 04:18:46.726636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:18:46.726656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:18:46.726685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:18:46.726706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-03 04:18:46.726740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-03 04:18:52.898078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-03 04:18:52.898286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-03 04:18:52.898308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-03 04:18:52.898336 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-03 04:18:52.898350 | orchestrator | 2026-02-03 04:18:52.898364 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-03 04:18:52.898377 | orchestrator | Tuesday 03 February 2026 04:18:46 +0000 (0:00:02.131) 0:00:32.007 ****** 2026-02-03 04:18:52.898389 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:18:52.898401 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:18:52.898412 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:18:52.898423 | orchestrator | 2026-02-03 04:18:52.898435 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-03 04:18:52.898446 | orchestrator | Tuesday 03 February 2026 04:18:47 +0000 (0:00:00.334) 0:00:32.341 ****** 2026-02-03 04:18:52.898458 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 04:18:52.898469 | orchestrator | 2026-02-03 04:18:52.898481 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-02-03 04:18:52.898492 | orchestrator | Tuesday 03 February 2026 04:18:47 +0000 (0:00:00.858) 0:00:33.200 ****** 2026-02-03 04:18:52.898504 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-02-03 04:18:52.898516 | 
orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-02-03 04:18:52.898536 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-02-03 04:18:52.898548 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-02-03 04:18:52.898562 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-02-03 04:18:52.898575 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-02-03 04:18:52.898588 | orchestrator | 2026-02-03 04:18:52.898601 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-02-03 04:18:52.898614 | orchestrator | Tuesday 03 February 2026 04:18:49 +0000 (0:00:01.725) 0:00:34.925 ****** 2026-02-03 04:18:52.898650 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-03 04:18:52.898665 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-03 04:18:52.898685 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-03 04:18:52.898699 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-03 04:18:52.898722 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-03 04:19:04.028965 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-03 04:19:04.029057 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-03 04:19:04.029084 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-03 04:19:04.029091 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-03 04:19:04.029098 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-03 04:19:04.029140 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-03 
04:19:04.029147 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-03 04:19:04.029154 | orchestrator | 2026-02-03 04:19:04.029163 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-02-03 04:19:04.029214 | orchestrator | Tuesday 03 February 2026 04:18:53 +0000 (0:00:03.478) 0:00:38.404 ****** 2026-02-03 04:19:04.029220 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-03 04:19:04.029228 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-03 04:19:04.029234 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-03 04:19:04.029239 | orchestrator | 2026-02-03 04:19:04.029245 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-02-03 04:19:04.029251 | orchestrator | Tuesday 03 February 2026 04:18:54 +0000 (0:00:01.535) 0:00:39.939 ****** 2026-02-03 04:19:04.029258 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-02-03 04:19:04.029264 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-02-03 04:19:04.029270 | 
orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-02-03 04:19:04.029282 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-02-03 04:19:04.029288 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-02-03 04:19:04.029294 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-02-03 04:19:04.029301 | orchestrator | 2026-02-03 04:19:04.029309 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-02-03 04:19:04.029316 | orchestrator | Tuesday 03 February 2026 04:18:57 +0000 (0:00:02.822) 0:00:42.762 ****** 2026-02-03 04:19:04.029323 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-02-03 04:19:04.029336 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-02-03 04:19:04.029342 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-02-03 04:19:04.029348 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-02-03 04:19:04.029353 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-02-03 04:19:04.029359 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-02-03 04:19:04.029365 | orchestrator | 2026-02-03 04:19:04.029371 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-02-03 04:19:04.029377 | orchestrator | Tuesday 03 February 2026 04:18:58 +0000 (0:00:01.037) 0:00:43.799 ****** 2026-02-03 04:19:04.029383 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:19:04.029389 | orchestrator | 2026-02-03 04:19:04.029395 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-02-03 04:19:04.029401 | orchestrator | Tuesday 03 February 2026 04:18:58 +0000 (0:00:00.149) 0:00:43.948 ****** 2026-02-03 04:19:04.029407 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:19:04.029413 | orchestrator | 
skipping: [testbed-node-1] 2026-02-03 04:19:04.029419 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:19:04.029425 | orchestrator | 2026-02-03 04:19:04.029431 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-03 04:19:04.029436 | orchestrator | Tuesday 03 February 2026 04:18:59 +0000 (0:00:00.566) 0:00:44.515 ****** 2026-02-03 04:19:04.029443 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 04:19:04.029449 | orchestrator | 2026-02-03 04:19:04.029455 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-02-03 04:19:04.029461 | orchestrator | Tuesday 03 February 2026 04:18:59 +0000 (0:00:00.617) 0:00:45.132 ****** 2026-02-03 04:19:04.029475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-03 04:19:04.983588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-03 04:19:04.983695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-03 04:19:04.983725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:19:04.983735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:19:04.983743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:19:04.983766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-03 04:19:04.983776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-03 04:19:04.983788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-03 
04:19:04.983802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-03 04:19:04.983810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-03 04:19:04.983818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-03 04:19:04.983826 | orchestrator | 2026-02-03 04:19:04.983835 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-02-03 04:19:04.983844 | orchestrator | Tuesday 03 February 2026 04:19:04 +0000 (0:00:04.189) 0:00:49.322 ****** 2026-02-03 04:19:04.983858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-03 04:19:05.094459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 04:19:05.094571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-03 04:19:05.094590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-03 04:19:05.094604 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:19:05.094619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-03 04:19:05.094632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 04:19:05.094662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-02-03 04:19:05.094701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-03 04:19:05.094713 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:19:05.094725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-03 04:19:05.094737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 04:19:05.094749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-03 04:19:05.094761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-03 04:19:05.094781 | orchestrator | skipping: 
[testbed-node-2] 2026-02-03 04:19:05.094793 | orchestrator | 2026-02-03 04:19:05.094806 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-02-03 04:19:05.094826 | orchestrator | Tuesday 03 February 2026 04:19:05 +0000 (0:00:00.955) 0:00:50.277 ****** 2026-02-03 04:19:05.676601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-03 04:19:05.676697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 04:19:05.676712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-03 04:19:05.676724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-03 04:19:05.676735 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:19:05.676747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-03 04:19:05.676802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 04:19:05.676818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-03 04:19:05.676829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-03 04:19:05.676837 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:19:05.676847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-03 04:19:05.676857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 04:19:05.676881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-03 04:19:10.109212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-03 04:19:10.109296 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:19:10.109307 | orchestrator | 2026-02-03 04:19:10.109316 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 
2026-02-03 04:19:10.109324 | orchestrator | Tuesday 03 February 2026 04:19:05 +0000 (0:00:00.891) 0:00:51.169 ****** 2026-02-03 04:19:10.109332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-03 04:19:10.109341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-03 
04:19:10.109348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-03 04:19:10.109385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:19:10.109398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:19:10.109406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:19:10.109412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-03 04:19:10.109421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-03 04:19:10.109433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-03 04:19:10.109444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-03 04:19:23.401015 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-03 04:19:23.401120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-03 04:19:23.401136 | orchestrator | 2026-02-03 04:19:23.401151 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-02-03 04:19:23.401164 | orchestrator | Tuesday 03 February 2026 04:19:10 +0000 (0:00:04.220) 0:00:55.389 ****** 2026-02-03 04:19:23.401275 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-03 04:19:23.401291 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-03 04:19:23.401302 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-03 04:19:23.401313 | orchestrator | 2026-02-03 04:19:23.401324 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-02-03 04:19:23.401335 | orchestrator | Tuesday 03 February 2026 04:19:12 +0000 (0:00:01.944) 0:00:57.334 ****** 2026-02-03 04:19:23.401347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-03 04:19:23.401397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-03 04:19:23.401450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-03 04:19:23.401470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:19:23.401490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:19:23.401509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:19:23.401545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-03 04:19:23.401566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-03 04:19:23.401602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-03 04:19:25.644108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-03 04:19:25.644277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-03 04:19:25.644316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-03 04:19:25.644327 | orchestrator | 2026-02-03 04:19:25.644338 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-02-03 04:19:25.644349 | orchestrator | Tuesday 03 February 2026 04:19:23 +0000 (0:00:11.341) 0:01:08.675 ****** 2026-02-03 04:19:25.644358 | orchestrator | changed: [testbed-node-0] 
2026-02-03 04:19:25.644368 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:19:25.644377 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:19:25.644385 | orchestrator | 2026-02-03 04:19:25.644394 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-02-03 04:19:25.644403 | orchestrator | Tuesday 03 February 2026 04:19:25 +0000 (0:00:01.534) 0:01:10.210 ****** 2026-02-03 04:19:25.644414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-03 04:19:25.644438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}})  2026-02-03 04:19:25.644464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-03 04:19:25.644475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-03 04:19:25.644492 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:19:25.644502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-03 04:19:25.644511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 04:19:25.644520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-03 04:19:25.644546 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-03 04:19:29.210719 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:19:29.210804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-03 04:19:29.210838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 04:19:29.210847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-03 04:19:29.210855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-03 04:19:29.210862 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:19:29.210869 | orchestrator | 2026-02-03 
04:19:29.210875 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-02-03 04:19:29.210882 | orchestrator | Tuesday 03 February 2026 04:19:25 +0000 (0:00:00.710) 0:01:10.921 ****** 2026-02-03 04:19:29.210888 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:19:29.210895 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:19:29.210902 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:19:29.210908 | orchestrator | 2026-02-03 04:19:29.210914 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-02-03 04:19:29.210921 | orchestrator | Tuesday 03 February 2026 04:19:26 +0000 (0:00:00.593) 0:01:11.514 ****** 2026-02-03 04:19:29.210955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-03 04:19:29.210969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-03 04:19:29.210975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-03 04:19:29.210981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:19:29.210999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:19:29.211009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:19:29.211022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-03 04:21:04.073339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-03 04:21:04.073423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-03 04:21:04.073432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-03 04:21:04.073437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-03 04:21:04.073455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2026-02-03 04:21:04.073476 | orchestrator | 2026-02-03 04:21:04.073482 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-03 04:21:04.073488 | orchestrator | Tuesday 03 February 2026 04:19:29 +0000 (0:00:02.992) 0:01:14.506 ****** 2026-02-03 04:21:04.073493 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:21:04.073499 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:21:04.073504 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:21:04.073508 | orchestrator | 2026-02-03 04:21:04.073513 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-02-03 04:21:04.073517 | orchestrator | Tuesday 03 February 2026 04:19:29 +0000 (0:00:00.325) 0:01:14.832 ****** 2026-02-03 04:21:04.073522 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:21:04.073527 | orchestrator | 2026-02-03 04:21:04.073543 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-02-03 04:21:04.073548 | orchestrator | Tuesday 03 February 2026 04:19:31 +0000 (0:00:02.085) 0:01:16.917 ****** 2026-02-03 04:21:04.073552 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:21:04.073557 | orchestrator | 2026-02-03 04:21:04.073562 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-02-03 04:21:04.073566 | orchestrator | Tuesday 03 February 2026 04:19:33 +0000 (0:00:02.255) 0:01:19.173 ****** 2026-02-03 04:21:04.073570 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:21:04.073575 | orchestrator | 2026-02-03 04:21:04.073579 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-03 04:21:04.073583 | orchestrator | Tuesday 03 February 2026 04:19:53 +0000 (0:00:19.534) 0:01:38.707 ****** 2026-02-03 04:21:04.073588 | orchestrator | 2026-02-03 04:21:04.073592 | orchestrator | TASK [cinder : Flush handlers] 
************************************************* 2026-02-03 04:21:04.073597 | orchestrator | Tuesday 03 February 2026 04:19:53 +0000 (0:00:00.280) 0:01:38.988 ****** 2026-02-03 04:21:04.073601 | orchestrator | 2026-02-03 04:21:04.073605 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-03 04:21:04.073610 | orchestrator | Tuesday 03 February 2026 04:19:53 +0000 (0:00:00.070) 0:01:39.059 ****** 2026-02-03 04:21:04.073614 | orchestrator | 2026-02-03 04:21:04.073619 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-02-03 04:21:04.073623 | orchestrator | Tuesday 03 February 2026 04:19:53 +0000 (0:00:00.072) 0:01:39.132 ****** 2026-02-03 04:21:04.073627 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:21:04.073640 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:21:04.073649 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:21:04.073654 | orchestrator | 2026-02-03 04:21:04.073659 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-02-03 04:21:04.073663 | orchestrator | Tuesday 03 February 2026 04:20:23 +0000 (0:00:29.932) 0:02:09.064 ****** 2026-02-03 04:21:04.073667 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:21:04.073672 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:21:04.073676 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:21:04.073681 | orchestrator | 2026-02-03 04:21:04.073685 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-02-03 04:21:04.073690 | orchestrator | Tuesday 03 February 2026 04:20:34 +0000 (0:00:10.360) 0:02:19.425 ****** 2026-02-03 04:21:04.073694 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:21:04.073698 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:21:04.073703 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:21:04.073707 | orchestrator | 2026-02-03 
04:21:04.073712 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-02-03 04:21:04.073716 | orchestrator | Tuesday 03 February 2026 04:20:57 +0000 (0:00:23.010) 0:02:42.436 ****** 2026-02-03 04:21:04.073720 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:21:04.073732 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:21:04.073739 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:21:04.073745 | orchestrator | 2026-02-03 04:21:04.073752 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-02-03 04:21:04.073763 | orchestrator | Tuesday 03 February 2026 04:21:03 +0000 (0:00:06.514) 0:02:48.951 ****** 2026-02-03 04:21:04.073772 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:21:04.073779 | orchestrator | 2026-02-03 04:21:04.073786 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 04:21:04.073794 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-03 04:21:04.073803 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-03 04:21:04.073809 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-03 04:21:04.073816 | orchestrator | 2026-02-03 04:21:04.073823 | orchestrator | 2026-02-03 04:21:04.073831 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 04:21:04.073838 | orchestrator | Tuesday 03 February 2026 04:21:04 +0000 (0:00:00.298) 0:02:49.249 ****** 2026-02-03 04:21:04.073904 | orchestrator | =============================================================================== 2026-02-03 04:21:04.073912 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 29.93s 2026-02-03 04:21:04.073919 | orchestrator | cinder 
: Restart cinder-volume container ------------------------------- 23.01s 2026-02-03 04:21:04.073934 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.53s 2026-02-03 04:21:04.073942 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 11.34s 2026-02-03 04:21:04.073949 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.36s 2026-02-03 04:21:04.073956 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.34s 2026-02-03 04:21:04.073963 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.67s 2026-02-03 04:21:04.073971 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 6.51s 2026-02-03 04:21:04.073978 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.22s 2026-02-03 04:21:04.073986 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.19s 2026-02-03 04:21:04.073993 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.00s 2026-02-03 04:21:04.074000 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.55s 2026-02-03 04:21:04.074007 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.48s 2026-02-03 04:21:04.074059 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.30s 2026-02-03 04:21:04.074078 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.27s 2026-02-03 04:21:04.490191 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.99s 2026-02-03 04:21:04.490371 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.82s 2026-02-03 04:21:04.490385 | orchestrator | cinder : Creating 
Cinder database user and setting permissions ---------- 2.26s 2026-02-03 04:21:04.490396 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.13s 2026-02-03 04:21:04.490406 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.09s 2026-02-03 04:21:07.105894 | orchestrator | 2026-02-03 04:21:07 | INFO  | Task 6f0bfeee-c963-4a12-a063-164d576558c6 (barbican) was prepared for execution. 2026-02-03 04:21:07.105971 | orchestrator | 2026-02-03 04:21:07 | INFO  | It takes a moment until task 6f0bfeee-c963-4a12-a063-164d576558c6 (barbican) has been started and output is visible here. 2026-02-03 04:21:51.427466 | orchestrator | 2026-02-03 04:21:51.427582 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-03 04:21:51.427604 | orchestrator | 2026-02-03 04:21:51.427614 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-03 04:21:51.427623 | orchestrator | Tuesday 03 February 2026 04:21:11 +0000 (0:00:00.271) 0:00:00.271 ****** 2026-02-03 04:21:51.427632 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:21:51.427641 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:21:51.427650 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:21:51.427658 | orchestrator | 2026-02-03 04:21:51.427666 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-03 04:21:51.427675 | orchestrator | Tuesday 03 February 2026 04:21:12 +0000 (0:00:00.370) 0:00:00.642 ****** 2026-02-03 04:21:51.427699 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-02-03 04:21:51.427708 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-02-03 04:21:51.427716 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-02-03 04:21:51.427725 | orchestrator | 2026-02-03 04:21:51.427733 | orchestrator | PLAY [Apply role barbican] 
***************************************************** 2026-02-03 04:21:51.427741 | orchestrator | 2026-02-03 04:21:51.427749 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-03 04:21:51.427757 | orchestrator | Tuesday 03 February 2026 04:21:12 +0000 (0:00:00.470) 0:00:01.112 ****** 2026-02-03 04:21:51.427767 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 04:21:51.427775 | orchestrator | 2026-02-03 04:21:51.427783 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-02-03 04:21:51.427792 | orchestrator | Tuesday 03 February 2026 04:21:13 +0000 (0:00:00.632) 0:00:01.745 ****** 2026-02-03 04:21:51.427800 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-02-03 04:21:51.427808 | orchestrator | 2026-02-03 04:21:51.427816 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-02-03 04:21:51.427824 | orchestrator | Tuesday 03 February 2026 04:21:16 +0000 (0:00:03.523) 0:00:05.268 ****** 2026-02-03 04:21:51.427832 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-02-03 04:21:51.427841 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-02-03 04:21:51.427849 | orchestrator | 2026-02-03 04:21:51.427857 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-02-03 04:21:51.427865 | orchestrator | Tuesday 03 February 2026 04:21:23 +0000 (0:00:06.363) 0:00:11.632 ****** 2026-02-03 04:21:51.427873 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-03 04:21:51.427881 | orchestrator | 2026-02-03 04:21:51.427889 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-02-03 
04:21:51.427897 | orchestrator | Tuesday 03 February 2026 04:21:26 +0000 (0:00:03.372) 0:00:15.004 ****** 2026-02-03 04:21:51.427905 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-03 04:21:51.427913 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-02-03 04:21:51.427921 | orchestrator | 2026-02-03 04:21:51.427941 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-02-03 04:21:51.427950 | orchestrator | Tuesday 03 February 2026 04:21:30 +0000 (0:00:04.114) 0:00:19.119 ****** 2026-02-03 04:21:51.427958 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-03 04:21:51.427989 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-02-03 04:21:51.427998 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-02-03 04:21:51.428006 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-02-03 04:21:51.428014 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-02-03 04:21:51.428025 | orchestrator | 2026-02-03 04:21:51.428038 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-02-03 04:21:51.428077 | orchestrator | Tuesday 03 February 2026 04:21:45 +0000 (0:00:15.412) 0:00:34.532 ****** 2026-02-03 04:21:51.428091 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-02-03 04:21:51.428105 | orchestrator | 2026-02-03 04:21:51.428119 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-02-03 04:21:51.428132 | orchestrator | Tuesday 03 February 2026 04:21:49 +0000 (0:00:03.815) 0:00:38.347 ****** 2026-02-03 04:21:51.428144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-03 04:21:51.428172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-03 04:21:51.428181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-03 04:21:51.428195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-03 04:21:51.428215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-03 04:21:51.428247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-03 04:21:51.428266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:21:57.628852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:21:57.628954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:21:57.628969 | orchestrator | 2026-02-03 04:21:57.628983 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-02-03 04:21:57.628994 | orchestrator | Tuesday 03 February 2026 04:21:51 +0000 (0:00:01.658) 0:00:40.005 ****** 2026-02-03 04:21:57.629005 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-02-03 04:21:57.629014 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-02-03 04:21:57.629024 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-02-03 04:21:57.629034 | orchestrator | 2026-02-03 04:21:57.629044 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-02-03 04:21:57.629054 | orchestrator | Tuesday 03 February 2026 04:21:52 +0000 (0:00:01.181) 0:00:41.187 ****** 2026-02-03 04:21:57.629085 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:21:57.629097 | orchestrator | 2026-02-03 04:21:57.629107 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-02-03 04:21:57.629117 | orchestrator | Tuesday 03 February 2026 04:21:52 +0000 (0:00:00.340) 0:00:41.528 ****** 2026-02-03 04:21:57.629126 | orchestrator | 
skipping: [testbed-node-0] 2026-02-03 04:21:57.629136 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:21:57.629145 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:21:57.629155 | orchestrator | 2026-02-03 04:21:57.629165 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-03 04:21:57.629189 | orchestrator | Tuesday 03 February 2026 04:21:53 +0000 (0:00:00.336) 0:00:41.865 ****** 2026-02-03 04:21:57.629199 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 04:21:57.629210 | orchestrator | 2026-02-03 04:21:57.629220 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-02-03 04:21:57.629278 | orchestrator | Tuesday 03 February 2026 04:21:53 +0000 (0:00:00.594) 0:00:42.459 ****** 2026-02-03 04:21:57.629290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-03 04:21:57.629317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-03 04:21:57.629329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-03 04:21:57.629348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-03 04:21:57.629364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-03 04:21:57.629375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-03 04:21:57.629387 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:21:57.629408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:21:59.102664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:21:59.102778 | orchestrator | 2026-02-03 04:21:59.102803 | orchestrator | TASK [service-cert-copy : barbican | Copying over 
backend internal TLS certificate] *** 2026-02-03 04:21:59.102850 | orchestrator | Tuesday 03 February 2026 04:21:57 +0000 (0:00:03.747) 0:00:46.207 ****** 2026-02-03 04:21:59.102869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-03 04:21:59.102902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-03 04:21:59.102920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-03 04:21:59.102936 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:21:59.102952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-03 04:21:59.102991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-03 04:21:59.103022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-03 04:21:59.103045 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:21:59.103069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-03 04:21:59.103085 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-03 04:21:59.103100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-03 04:21:59.103115 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:21:59.103131 | orchestrator | 2026-02-03 04:21:59.103147 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-02-03 04:21:59.103163 | orchestrator | Tuesday 03 February 2026 04:21:58 +0000 (0:00:00.609) 0:00:46.816 ****** 2026-02-03 04:21:59.103193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-03 04:22:02.584468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-03 04:22:02.584557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-03 
04:22:02.584570 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:22:02.584595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-03 04:22:02.584604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-03 04:22:02.584612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-03 04:22:02.584620 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:22:02.584643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-03 04:22:02.584670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-03 04:22:02.584682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-03 04:22:02.584690 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:22:02.584697 | orchestrator | 2026-02-03 04:22:02.584706 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-02-03 04:22:02.584715 | orchestrator | Tuesday 03 February 2026 04:21:59 +0000 (0:00:00.876) 0:00:47.693 ****** 2026-02-03 04:22:02.584723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-03 04:22:02.584732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-03 04:22:02.584752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-03 04:22:12.386519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-03 04:22:12.386678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-03 04:22:12.386708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-03 04:22:12.386729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:22:12.386783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:22:12.386805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:22:12.386825 | orchestrator | 2026-02-03 04:22:12.386848 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-02-03 04:22:12.386868 | orchestrator | Tuesday 03 February 2026 04:22:02 +0000 (0:00:03.475) 0:00:51.169 ****** 2026-02-03 04:22:12.386887 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:22:12.386905 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:22:12.386924 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:22:12.386942 | orchestrator | 2026-02-03 04:22:12.386987 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-02-03 04:22:12.387006 | orchestrator | Tuesday 03 February 2026 04:22:04 +0000 (0:00:01.603) 0:00:52.772 ****** 2026-02-03 04:22:12.387023 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-03 04:22:12.387042 | orchestrator | 2026-02-03 04:22:12.387062 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-02-03 04:22:12.387082 | orchestrator | Tuesday 03 February 2026 04:22:05 +0000 (0:00:00.966) 0:00:53.738 ****** 2026-02-03 04:22:12.387100 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:22:12.387119 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:22:12.387137 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:22:12.387156 | orchestrator | 2026-02-03 04:22:12.387176 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-02-03 04:22:12.387195 | orchestrator | Tuesday 03 February 2026 04:22:05 +0000 (0:00:00.637) 0:00:54.376 ****** 2026-02-03 04:22:12.387288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-03 04:22:12.387316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-03 04:22:12.387350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-03 04:22:12.387384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-03 04:22:13.318474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-03 04:22:13.318562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-03 04:22:13.318570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:22:13.318588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:22:13.318592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:22:13.318597 | orchestrator | 2026-02-03 04:22:13.318602 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-02-03 04:22:13.318607 | orchestrator | Tuesday 03 February 2026 04:22:12 +0000 (0:00:06.594) 0:01:00.970 ****** 2026-02-03 04:22:13.318621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-03 04:22:13.318628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-03 04:22:13.318634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-03 04:22:13.318638 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:22:13.318649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-03 04:22:13.318653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-03 04:22:13.318657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-03 04:22:13.318661 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:22:13.318670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-03 04:22:15.715123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-03 04:22:15.715180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-03 04:22:15.715213 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:22:15.715225 | orchestrator | 2026-02-03 04:22:15.715267 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-02-03 04:22:15.715279 | orchestrator | Tuesday 03 February 2026 04:22:13 +0000 (0:00:00.930) 0:01:01.901 ****** 2026-02-03 04:22:15.715289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-03 04:22:15.715301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-03 04:22:15.715333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-03 04:22:15.715346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-03 04:22:15.715364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-03 04:22:15.715374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-03 04:22:15.715384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:22:15.715394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:22:15.715403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:22:15.715414 | orchestrator | 2026-02-03 04:22:15.715423 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-03 04:22:15.715439 | orchestrator | Tuesday 03 February 2026 04:22:15 +0000 (0:00:02.396) 0:01:04.298 ****** 2026-02-03 04:23:05.035849 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:23:05.035957 | orchestrator | skipping: [testbed-node-1] 2026-02-03 
04:23:05.035968 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:23:05.035995 | orchestrator | 2026-02-03 04:23:05.036004 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-02-03 04:23:05.036012 | orchestrator | Tuesday 03 February 2026 04:22:16 +0000 (0:00:00.324) 0:01:04.623 ****** 2026-02-03 04:23:05.036019 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:23:05.036025 | orchestrator | 2026-02-03 04:23:05.036032 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-02-03 04:23:05.036039 | orchestrator | Tuesday 03 February 2026 04:22:18 +0000 (0:00:02.115) 0:01:06.738 ****** 2026-02-03 04:23:05.036046 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:23:05.036052 | orchestrator | 2026-02-03 04:23:05.036059 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-02-03 04:23:05.036066 | orchestrator | Tuesday 03 February 2026 04:22:20 +0000 (0:00:02.204) 0:01:08.943 ****** 2026-02-03 04:23:05.036072 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:23:05.036079 | orchestrator | 2026-02-03 04:23:05.036086 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-03 04:23:05.036092 | orchestrator | Tuesday 03 February 2026 04:22:32 +0000 (0:00:12.510) 0:01:21.453 ****** 2026-02-03 04:23:05.036099 | orchestrator | 2026-02-03 04:23:05.036106 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-03 04:23:05.036113 | orchestrator | Tuesday 03 February 2026 04:22:33 +0000 (0:00:00.307) 0:01:21.760 ****** 2026-02-03 04:23:05.036120 | orchestrator | 2026-02-03 04:23:05.036126 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-03 04:23:05.036133 | orchestrator | Tuesday 03 February 2026 04:22:33 +0000 (0:00:00.082) 0:01:21.843 ****** 2026-02-03 
04:23:05.036140 | orchestrator | 2026-02-03 04:23:05.036146 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-02-03 04:23:05.036153 | orchestrator | Tuesday 03 February 2026 04:22:33 +0000 (0:00:00.074) 0:01:21.917 ****** 2026-02-03 04:23:05.036159 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:23:05.036166 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:23:05.036173 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:23:05.036180 | orchestrator | 2026-02-03 04:23:05.036186 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-02-03 04:23:05.036193 | orchestrator | Tuesday 03 February 2026 04:22:44 +0000 (0:00:11.302) 0:01:33.219 ****** 2026-02-03 04:23:05.036200 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:23:05.036207 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:23:05.036214 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:23:05.036220 | orchestrator | 2026-02-03 04:23:05.036227 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-02-03 04:23:05.036234 | orchestrator | Tuesday 03 February 2026 04:22:54 +0000 (0:00:09.622) 0:01:42.842 ****** 2026-02-03 04:23:05.036241 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:23:05.036311 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:23:05.036321 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:23:05.036328 | orchestrator | 2026-02-03 04:23:05.036335 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 04:23:05.036343 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-03 04:23:05.036351 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-03 04:23:05.036358 | orchestrator | testbed-node-2 : ok=14  changed=10  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-03 04:23:05.036365 | orchestrator | 2026-02-03 04:23:05.036372 | orchestrator | 2026-02-03 04:23:05.036379 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 04:23:05.036385 | orchestrator | Tuesday 03 February 2026 04:23:04 +0000 (0:00:10.408) 0:01:53.250 ****** 2026-02-03 04:23:05.036399 | orchestrator | =============================================================================== 2026-02-03 04:23:05.036406 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.41s 2026-02-03 04:23:05.036413 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.51s 2026-02-03 04:23:05.036419 | orchestrator | barbican : Restart barbican-api container ------------------------------ 11.30s 2026-02-03 04:23:05.036426 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.41s 2026-02-03 04:23:05.036433 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 9.62s 2026-02-03 04:23:05.036440 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.59s 2026-02-03 04:23:05.036446 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.36s 2026-02-03 04:23:05.036453 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.11s 2026-02-03 04:23:05.036460 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.82s 2026-02-03 04:23:05.036467 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.75s 2026-02-03 04:23:05.036473 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.52s 2026-02-03 04:23:05.036480 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.48s 
2026-02-03 04:23:05.036487 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.37s 2026-02-03 04:23:05.036494 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.40s 2026-02-03 04:23:05.036501 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.20s 2026-02-03 04:23:05.036521 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.12s 2026-02-03 04:23:05.036532 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.66s 2026-02-03 04:23:05.036539 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.60s 2026-02-03 04:23:05.036546 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.18s 2026-02-03 04:23:05.036553 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 0.97s 2026-02-03 04:23:07.568950 | orchestrator | 2026-02-03 04:23:07 | INFO  | Task f0ff3995-002c-4f72-a58f-bd7b4e571171 (designate) was prepared for execution. 2026-02-03 04:23:07.569053 | orchestrator | 2026-02-03 04:23:07 | INFO  | It takes a moment until task f0ff3995-002c-4f72-a58f-bd7b4e571171 (designate) has been started and output is visible here. 
2026-02-03 04:23:39.832661 | orchestrator | 2026-02-03 04:23:39.832794 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-03 04:23:39.832820 | orchestrator | 2026-02-03 04:23:39.832840 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-03 04:23:39.832858 | orchestrator | Tuesday 03 February 2026 04:23:11 +0000 (0:00:00.285) 0:00:00.285 ****** 2026-02-03 04:23:39.832876 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:23:39.832896 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:23:39.832914 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:23:39.832932 | orchestrator | 2026-02-03 04:23:39.832950 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-03 04:23:39.832969 | orchestrator | Tuesday 03 February 2026 04:23:12 +0000 (0:00:00.341) 0:00:00.627 ****** 2026-02-03 04:23:39.832988 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-02-03 04:23:39.833007 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-02-03 04:23:39.833025 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-02-03 04:23:39.833042 | orchestrator | 2026-02-03 04:23:39.833061 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-02-03 04:23:39.833081 | orchestrator | 2026-02-03 04:23:39.833101 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-03 04:23:39.833121 | orchestrator | Tuesday 03 February 2026 04:23:12 +0000 (0:00:00.541) 0:00:01.169 ****** 2026-02-03 04:23:39.833181 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 04:23:39.833205 | orchestrator | 2026-02-03 04:23:39.833228 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 
2026-02-03 04:23:39.833250 | orchestrator | Tuesday 03 February 2026 04:23:13 +0000 (0:00:00.644) 0:00:01.813 ****** 2026-02-03 04:23:39.833296 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-02-03 04:23:39.833313 | orchestrator | 2026-02-03 04:23:39.833332 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-02-03 04:23:39.833353 | orchestrator | Tuesday 03 February 2026 04:23:16 +0000 (0:00:03.486) 0:00:05.299 ****** 2026-02-03 04:23:39.833376 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-02-03 04:23:39.833397 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-02-03 04:23:39.833417 | orchestrator | 2026-02-03 04:23:39.833438 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-02-03 04:23:39.833459 | orchestrator | Tuesday 03 February 2026 04:23:23 +0000 (0:00:06.465) 0:00:11.765 ****** 2026-02-03 04:23:39.833479 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-03 04:23:39.833521 | orchestrator | 2026-02-03 04:23:39.833544 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-02-03 04:23:39.833582 | orchestrator | Tuesday 03 February 2026 04:23:26 +0000 (0:00:03.192) 0:00:14.958 ****** 2026-02-03 04:23:39.833602 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-03 04:23:39.833620 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-02-03 04:23:39.833637 | orchestrator | 2026-02-03 04:23:39.833654 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-02-03 04:23:39.833671 | orchestrator | Tuesday 03 February 2026 04:23:30 +0000 (0:00:04.020) 0:00:18.978 ****** 2026-02-03 04:23:39.833690 | orchestrator | ok: [testbed-node-0] => 
(item=admin) 2026-02-03 04:23:39.833711 | orchestrator | 2026-02-03 04:23:39.833727 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-02-03 04:23:39.833743 | orchestrator | Tuesday 03 February 2026 04:23:33 +0000 (0:00:03.133) 0:00:22.112 ****** 2026-02-03 04:23:39.833759 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-02-03 04:23:39.833774 | orchestrator | 2026-02-03 04:23:39.833790 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-02-03 04:23:39.833806 | orchestrator | Tuesday 03 February 2026 04:23:37 +0000 (0:00:03.890) 0:00:26.003 ****** 2026-02-03 04:23:39.833850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-03 04:23:39.833903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-03 04:23:39.833939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-03 04:23:39.833958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-03 04:23:39.833977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-03 04:23:39.833994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-03 04:23:39.834129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-03 04:23:39.834185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-03 04:23:46.415141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-03 04:23:46.415250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-03 04:23:46.415372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-03 04:23:46.415388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-03 04:23:46.415401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2026-02-03 04:23:46.415432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-03 04:23:46.415485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-03 04:23:46.415499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-03 
04:23:46.415511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:23:46.415523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:23:46.415534 | orchestrator | 2026-02-03 04:23:46.415548 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-02-03 04:23:46.415561 | orchestrator | Tuesday 03 February 2026 04:23:40 +0000 (0:00:02.876) 0:00:28.879 ****** 2026-02-03 04:23:46.415571 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:23:46.415583 | orchestrator | 2026-02-03 04:23:46.415593 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-02-03 04:23:46.415604 | orchestrator | Tuesday 03 February 2026 04:23:40 +0000 (0:00:00.143) 0:00:29.022 ****** 2026-02-03 04:23:46.415616 | orchestrator | skipping: [testbed-node-0] 2026-02-03 
04:23:46.415629 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:23:46.415640 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:23:46.415652 | orchestrator | 2026-02-03 04:23:46.415664 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-03 04:23:46.415676 | orchestrator | Tuesday 03 February 2026 04:23:41 +0000 (0:00:00.533) 0:00:29.556 ****** 2026-02-03 04:23:46.415691 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 04:23:46.415713 | orchestrator | 2026-02-03 04:23:46.415727 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-02-03 04:23:46.415740 | orchestrator | Tuesday 03 February 2026 04:23:41 +0000 (0:00:00.569) 0:00:30.125 ****** 2026-02-03 04:23:46.415760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-03 04:23:46.415786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-03 04:23:48.235710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-03 04:23:48.235829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-03 04:23:48.235848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-03 04:23:48.235910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-03 04:23:48.235931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-03 04:23:48.235957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-03 04:23:48.235967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-03 04:23:48.235976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-03 04:23:48.235987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-03 04:23:48.236003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-03 04:23:48.236017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-03 04:23:48.236027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-03 04:23:48.236043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-03 04:23:49.137752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:23:49.137867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:23:49.137894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:23:49.137944 | orchestrator | 2026-02-03 04:23:49.137965 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-02-03 04:23:49.137986 | orchestrator | Tuesday 03 February 2026 04:23:48 +0000 (0:00:06.435) 0:00:36.561 ****** 2026-02-03 04:23:49.138097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-03 04:23:49.138117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-03 04:23:49.138149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-03 04:23:49.138162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-03 04:23:49.138175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-03 04:23:49.138196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2026-02-03 04:23:49.138208 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:23:49.138226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-03 04:23:49.138239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-03 04:23:49.138250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-03 04:23:49.138332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-03 04:23:49.957178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-03 04:23:49.957419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-03 04:23:49.957447 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:23:49.957478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-03 04:23:49.957493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-03 04:23:49.957505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-03 04:23:49.957517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-03 04:23:49.957550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-03 
04:23:49.957571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-03 04:23:49.957583 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:23:49.957595 | orchestrator | 2026-02-03 04:23:49.957608 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-02-03 04:23:49.957621 | orchestrator | Tuesday 03 February 2026 04:23:49 +0000 (0:00:01.017) 0:00:37.579 ****** 2026-02-03 04:23:49.957638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-03 04:23:49.957651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-03 04:23:49.957662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-03 04:23:49.957681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-03 04:23:50.299341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-03 04:23:50.299431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-03 04:23:50.299440 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:23:50.299465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-03 04:23:50.299474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-03 04:23:50.299483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-03 04:23:50.299489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-03 04:23:50.299529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-03 04:23:50.299536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-03 04:23:50.299542 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:23:50.299552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-03 04:23:50.299558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-03 04:23:50.299563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-03 04:23:50.299569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-03 04:23:50.299586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-03 04:23:54.461521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-03 04:23:54.461643 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:23:54.461663 | orchestrator | 2026-02-03 04:23:54.461679 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-02-03 
04:23:54.461694 | orchestrator | Tuesday 03 February 2026 04:23:50 +0000 (0:00:01.047) 0:00:38.626 ****** 2026-02-03 04:23:54.461727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-03 04:23:54.461744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-03 04:23:54.461759 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-03 04:23:54.461816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-03 04:23:54.461834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-03 04:23:54.461854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-03 04:23:54.461870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-03 04:23:54.461885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-03 04:23:54.461899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-03 04:23:54.461923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-03 04:23:54.461949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-03 04:24:06.361394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-03 04:24:06.361515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-03 04:24:06.361527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-03 04:24:06.361533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-03 04:24:06.361553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:24:06.361560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:24:06.361577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:24:06.361583 | orchestrator | 2026-02-03 04:24:06.361590 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-02-03 04:24:06.361597 | orchestrator | Tuesday 03 February 2026 04:23:56 +0000 (0:00:05.967) 0:00:44.594 ****** 2026-02-03 04:24:06.361607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-03 04:24:06.361614 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-03 04:24:06.361624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-03 04:24:06.361630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-03 04:24:06.361643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-03 04:24:14.551630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-03 04:24:14.551747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-03 04:24:14.551760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-03 04:24:14.551785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-03 04:24:14.551793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-03 04:24:14.551801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-03 04:24:14.551822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-03 04:24:14.551833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-03 04:24:14.551841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-03 04:24:14.551852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-03 04:24:14.551859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:24:14.551866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:24:14.551872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:24:14.551879 | orchestrator | 2026-02-03 04:24:14.551888 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-02-03 04:24:14.551896 | orchestrator | Tuesday 03 February 2026 04:24:10 +0000 (0:00:14.645) 0:00:59.239 ****** 2026-02-03 04:24:14.551907 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-03 04:24:19.190766 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-03 04:24:19.190843 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-03 04:24:19.190852 | orchestrator | 2026-02-03 04:24:19.190859 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-02-03 04:24:19.190866 | orchestrator | Tuesday 03 February 2026 04:24:14 +0000 (0:00:03.638) 0:01:02.878 ****** 2026-02-03 04:24:19.190871 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-03 04:24:19.190877 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-03 04:24:19.190882 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-03 04:24:19.190887 | orchestrator | 2026-02-03 04:24:19.190907 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-02-03 04:24:19.190912 | orchestrator | Tuesday 03 February 2026 04:24:17 +0000 (0:00:02.748) 0:01:05.627 ****** 2026-02-03 04:24:19.190938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-03 04:24:19.190947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-03 04:24:19.190952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2026-02-03 04:24:19.190969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-03 04:24:19.190977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-03 04:24:19.190986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-02-03 04:24:19.190998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-03 04:24:19.191004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-03 04:24:19.191009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-02-03 04:24:19.191015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-03 04:24:19.191026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-03 04:24:22.087261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-02-03 04:24:22.087394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-03 04:24:22.087403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-03 04:24:22.087407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-03 04:24:22.087412 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:24:22.087417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:24:22.087432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:24:22.087441 | orchestrator | 2026-02-03 04:24:22.087446 | orchestrator | TASK [designate : Copying over rndc.key] 
*************************************** 2026-02-03 04:24:22.087452 | orchestrator | Tuesday 03 February 2026 04:24:20 +0000 (0:00:02.939) 0:01:08.567 ****** 2026-02-03 04:24:22.087461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-03 04:24:22.087467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-03 
04:24:22.087471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-03 04:24:22.087475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-03 04:24:22.087481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-03 04:24:23.078504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-03 04:24:23.078627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-03 04:24:23.078646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-03 04:24:23.078659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-03 04:24:23.078712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-03 04:24:23.078725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-03 04:24:23.078785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-03 04:24:23.078799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-03 04:24:23.078810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-03 04:24:23.078819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-03 04:24:23.078830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-03 04:24:23.078842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-03 04:24:23.078861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-03 04:24:23.078872 | orchestrator |
2026-02-03 04:24:23.078885 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-03 04:24:23.078903 | orchestrator | Tuesday 03 February 2026 04:24:23 +0000 (0:00:02.833) 0:01:11.400 ******
2026-02-03 04:24:24.109635 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:24:24.109708 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:24:24.109715 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:24:24.109722 | orchestrator |
2026-02-03 04:24:24.109740 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-02-03 04:24:24.109747 | orchestrator | Tuesday 03 February 2026 04:24:23 +0000 (0:00:00.357) 0:01:11.758 ******
2026-02-03 04:24:24.109755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-03 04:24:24.109764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-03 04:24:24.109772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-03 04:24:24.109778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-03 04:24:24.109798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-03 04:24:24.109815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-03 04:24:24.109825 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:24:24.109830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-03 04:24:24.109836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-03 04:24:24.109841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-03 04:24:24.109847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-03 04:24:24.109856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-03 04:24:24.109865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-03 04:24:27.502916 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:24:27.503032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-03 04:24:27.503051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-03 04:24:27.503063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-03 04:24:27.503076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-03 04:24:27.503110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-03 04:24:27.503123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-03 04:24:27.503131 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:24:27.503138 | orchestrator |
2026-02-03 04:24:27.503158 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-02-03 04:24:27.503170 | orchestrator | Tuesday 03 February 2026 04:24:24 +0000 (0:00:00.803) 0:01:12.561 ******
2026-02-03 04:24:27.503177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-03 04:24:27.503184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-03 04:24:27.503191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-03 04:24:27.503203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-03 04:24:27.503214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-03 04:24:29.357469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-03 04:24:29.357564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-03 04:24:29.357577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-03 04:24:29.357586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-03 04:24:29.357640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-03 04:24:29.357652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-03 04:24:29.357694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-03 04:24:29.357711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-03 04:24:29.357726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-03 04:24:29.357741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-03 04:24:29.357756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-03 04:24:29.357765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-03 04:24:29.357773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-03 04:24:29.357782 | orchestrator |
2026-02-03 04:24:29.357792 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-03 04:24:29.357802 | orchestrator | Tuesday 03 February 2026 04:24:28 +0000 (0:00:04.548) 0:01:17.110 ******
2026-02-03 04:24:29.357810 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:24:29.357824 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:25:50.733913 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:25:50.734124 | orchestrator |
2026-02-03 04:25:50.734179 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-02-03 04:25:50.734203 | orchestrator | Tuesday 03 February 2026 04:24:29 +0000 (0:00:00.576) 0:01:17.686 ******
2026-02-03 04:25:50.734225 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-02-03 04:25:50.734245 | orchestrator |
2026-02-03 04:25:50.734264 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-02-03 04:25:50.734284 | orchestrator | Tuesday 03 February 2026 04:24:31 +0000 (0:00:02.142) 0:01:19.829 ******
2026-02-03 04:25:50.734372 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-03 04:25:50.734396 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-02-03 04:25:50.734415 | orchestrator |
2026-02-03 04:25:50.734435 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-02-03 04:25:50.734455 | orchestrator | Tuesday 03 February 2026 04:24:33 +0000 (0:00:02.408) 0:01:22.238 ******
2026-02-03 04:25:50.734474 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:25:50.734494 | orchestrator |
2026-02-03 04:25:50.734512 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-03 04:25:50.734531 | orchestrator | Tuesday 03 February 2026 04:24:49 +0000 (0:00:15.964) 0:01:38.202 ******
2026-02-03 04:25:50.734550 | orchestrator |
2026-02-03 04:25:50.734571 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-03 04:25:50.734622 | orchestrator | Tuesday 03 February 2026 04:24:49 +0000 (0:00:00.072) 0:01:38.275 ******
2026-02-03 04:25:50.734642 | orchestrator |
2026-02-03 04:25:50.734661 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-03 04:25:50.734680 | orchestrator | Tuesday 03 February 2026 04:24:50 +0000 (0:00:00.071) 0:01:38.346 ******
2026-02-03 04:25:50.734700 | orchestrator |
2026-02-03 04:25:50.734720 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-02-03 04:25:50.734739 | orchestrator | Tuesday 03 February 2026 04:24:50 +0000 (0:00:00.085) 0:01:38.432 ******
2026-02-03 04:25:50.734750 | orchestrator | changed: [testbed-node-1]
2026-02-03 04:25:50.734761 | orchestrator | changed: [testbed-node-2]
2026-02-03 04:25:50.734772 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:25:50.734783 | orchestrator |
2026-02-03 04:25:50.734794 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-02-03 04:25:50.734805 | orchestrator | Tuesday 03 February 2026 04:24:58 +0000 (0:00:08.813) 0:01:47.245 ******
2026-02-03 04:25:50.734816 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:25:50.734827 | orchestrator | changed: [testbed-node-1]
2026-02-03 04:25:50.734838 | orchestrator | changed: [testbed-node-2]
2026-02-03 04:25:50.734849 | orchestrator |
2026-02-03 04:25:50.734860 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-02-03 04:25:50.734871 | orchestrator | Tuesday 03 February 2026 04:25:05 +0000 (0:00:06.105) 0:01:53.351 ******
2026-02-03 04:25:50.734882 | orchestrator | changed: [testbed-node-1]
2026-02-03 04:25:50.734893 | orchestrator | changed: [testbed-node-2]
2026-02-03 04:25:50.734904 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:25:50.734914 | orchestrator |
2026-02-03 04:25:50.734925 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-02-03 04:25:50.734936 | orchestrator | Tuesday 03 February 2026 04:25:13 +0000 (0:00:08.811) 0:02:02.162 ******
2026-02-03 04:25:50.734956 | orchestrator | changed: [testbed-node-1]
2026-02-03 04:25:50.734975 | orchestrator | changed: [testbed-node-2]
2026-02-03 04:25:50.734992 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:25:50.735009 | orchestrator |
2026-02-03 04:25:50.735034
| orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-02-03 04:25:50.735057 | orchestrator | Tuesday 03 February 2026 04:25:22 +0000 (0:00:08.951) 0:02:11.114 ****** 2026-02-03 04:25:50.735075 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:25:50.735095 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:25:50.735114 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:25:50.735132 | orchestrator | 2026-02-03 04:25:50.735152 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-02-03 04:25:50.735173 | orchestrator | Tuesday 03 February 2026 04:25:33 +0000 (0:00:11.183) 0:02:22.297 ****** 2026-02-03 04:25:50.735201 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:25:50.735218 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:25:50.735234 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:25:50.735251 | orchestrator | 2026-02-03 04:25:50.735268 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-02-03 04:25:50.735285 | orchestrator | Tuesday 03 February 2026 04:25:42 +0000 (0:00:08.950) 0:02:31.248 ****** 2026-02-03 04:25:50.735304 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:25:50.735349 | orchestrator | 2026-02-03 04:25:50.735367 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 04:25:50.735386 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-03 04:25:50.735405 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-03 04:25:50.735421 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-03 04:25:50.735440 | orchestrator | 2026-02-03 04:25:50.735472 | orchestrator | 2026-02-03 04:25:50.735489 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-03 04:25:50.735505 | orchestrator | Tuesday 03 February 2026 04:25:50 +0000 (0:00:07.412) 0:02:38.661 ****** 2026-02-03 04:25:50.735522 | orchestrator | =============================================================================== 2026-02-03 04:25:50.735539 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.96s 2026-02-03 04:25:50.735555 | orchestrator | designate : Copying over designate.conf -------------------------------- 14.65s 2026-02-03 04:25:50.735599 | orchestrator | designate : Restart designate-mdns container --------------------------- 11.18s 2026-02-03 04:25:50.735632 | orchestrator | designate : Restart designate-producer container ------------------------ 8.95s 2026-02-03 04:25:50.735652 | orchestrator | designate : Restart designate-worker container -------------------------- 8.95s 2026-02-03 04:25:50.735670 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 8.81s 2026-02-03 04:25:50.735688 | orchestrator | designate : Restart designate-central container ------------------------- 8.81s 2026-02-03 04:25:50.735707 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.41s 2026-02-03 04:25:50.735718 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.47s 2026-02-03 04:25:50.735729 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.44s 2026-02-03 04:25:50.735740 | orchestrator | designate : Restart designate-api container ----------------------------- 6.11s 2026-02-03 04:25:50.735751 | orchestrator | designate : Copying over config.json files for services ----------------- 5.97s 2026-02-03 04:25:50.735762 | orchestrator | designate : Check designate containers ---------------------------------- 4.55s 2026-02-03 04:25:50.735772 | orchestrator | service-ks-register : designate | 
Creating users ------------------------ 4.02s 2026-02-03 04:25:50.735783 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.89s 2026-02-03 04:25:50.735794 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 3.64s 2026-02-03 04:25:50.735805 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.49s 2026-02-03 04:25:50.735816 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.19s 2026-02-03 04:25:50.735827 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.13s 2026-02-03 04:25:50.735837 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 2.94s 2026-02-03 04:25:53.252155 | orchestrator | 2026-02-03 04:25:53 | INFO  | Task 2488ed1b-882c-4f40-a6a2-ed03a2f030ce (octavia) was prepared for execution. 2026-02-03 04:25:53.252236 | orchestrator | 2026-02-03 04:25:53 | INFO  | It takes a moment until task 2488ed1b-882c-4f40-a6a2-ed03a2f030ce (octavia) has been started and output is visible here. 
2026-02-03 04:27:59.410284 | orchestrator | 2026-02-03 04:27:59.410407 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-03 04:27:59.410419 | orchestrator | 2026-02-03 04:27:59.410427 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-03 04:27:59.410434 | orchestrator | Tuesday 03 February 2026 04:25:57 +0000 (0:00:00.268) 0:00:00.268 ****** 2026-02-03 04:27:59.410441 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:27:59.410449 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:27:59.410456 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:27:59.410462 | orchestrator | 2026-02-03 04:27:59.410469 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-03 04:27:59.410476 | orchestrator | Tuesday 03 February 2026 04:25:57 +0000 (0:00:00.331) 0:00:00.600 ****** 2026-02-03 04:27:59.410482 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-02-03 04:27:59.410489 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-02-03 04:27:59.410496 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-02-03 04:27:59.410503 | orchestrator | 2026-02-03 04:27:59.410510 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-02-03 04:27:59.410539 | orchestrator | 2026-02-03 04:27:59.410545 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-03 04:27:59.410552 | orchestrator | Tuesday 03 February 2026 04:25:58 +0000 (0:00:00.473) 0:00:01.073 ****** 2026-02-03 04:27:59.410559 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 04:27:59.410566 | orchestrator | 2026-02-03 04:27:59.410573 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 
2026-02-03 04:27:59.410579 | orchestrator | Tuesday 03 February 2026 04:25:59 +0000 (0:00:00.595) 0:00:01.669 ****** 2026-02-03 04:27:59.410586 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-02-03 04:27:59.410592 | orchestrator | 2026-02-03 04:27:59.410599 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-02-03 04:27:59.410605 | orchestrator | Tuesday 03 February 2026 04:26:02 +0000 (0:00:03.502) 0:00:05.172 ****** 2026-02-03 04:27:59.410612 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-02-03 04:27:59.410618 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-02-03 04:27:59.410625 | orchestrator | 2026-02-03 04:27:59.410631 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-02-03 04:27:59.410638 | orchestrator | Tuesday 03 February 2026 04:26:08 +0000 (0:00:06.440) 0:00:11.612 ****** 2026-02-03 04:27:59.410644 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-03 04:27:59.410651 | orchestrator | 2026-02-03 04:27:59.410657 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-02-03 04:27:59.410664 | orchestrator | Tuesday 03 February 2026 04:26:12 +0000 (0:00:03.210) 0:00:14.823 ****** 2026-02-03 04:27:59.410670 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-03 04:27:59.410677 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-02-03 04:27:59.410683 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-02-03 04:27:59.410690 | orchestrator | 2026-02-03 04:27:59.410700 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-02-03 04:27:59.410710 | orchestrator | Tuesday 03 February 2026 04:26:20 +0000 
(0:00:08.122) 0:00:22.945 ****** 2026-02-03 04:27:59.410720 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-03 04:27:59.410733 | orchestrator | 2026-02-03 04:27:59.410761 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-02-03 04:27:59.410772 | orchestrator | Tuesday 03 February 2026 04:26:23 +0000 (0:00:03.226) 0:00:26.172 ****** 2026-02-03 04:27:59.410782 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-02-03 04:27:59.410792 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-02-03 04:27:59.410802 | orchestrator | 2026-02-03 04:27:59.410812 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-02-03 04:27:59.410823 | orchestrator | Tuesday 03 February 2026 04:26:30 +0000 (0:00:07.243) 0:00:33.415 ****** 2026-02-03 04:27:59.410833 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-02-03 04:27:59.410843 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-02-03 04:27:59.410853 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-02-03 04:27:59.410862 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-02-03 04:27:59.410871 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-02-03 04:27:59.410879 | orchestrator | 2026-02-03 04:27:59.410888 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-03 04:27:59.410899 | orchestrator | Tuesday 03 February 2026 04:26:46 +0000 (0:00:15.545) 0:00:48.961 ****** 2026-02-03 04:27:59.410909 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 04:27:59.410928 | orchestrator | 2026-02-03 04:27:59.410939 | orchestrator | TASK [octavia : Create amphora flavor] 
***************************************** 2026-02-03 04:27:59.410950 | orchestrator | Tuesday 03 February 2026 04:26:47 +0000 (0:00:00.949) 0:00:49.910 ****** 2026-02-03 04:27:59.410961 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:27:59.410972 | orchestrator | 2026-02-03 04:27:59.410983 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-02-03 04:27:59.410994 | orchestrator | Tuesday 03 February 2026 04:26:52 +0000 (0:00:04.854) 0:00:54.765 ****** 2026-02-03 04:27:59.411004 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:27:59.411014 | orchestrator | 2026-02-03 04:27:59.411025 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-02-03 04:27:59.411073 | orchestrator | Tuesday 03 February 2026 04:26:56 +0000 (0:00:04.508) 0:00:59.273 ****** 2026-02-03 04:27:59.411081 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:27:59.411088 | orchestrator | 2026-02-03 04:27:59.411094 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-02-03 04:27:59.411101 | orchestrator | Tuesday 03 February 2026 04:26:59 +0000 (0:00:03.150) 0:01:02.423 ****** 2026-02-03 04:27:59.411107 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-02-03 04:27:59.411114 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-02-03 04:27:59.411120 | orchestrator | 2026-02-03 04:27:59.411126 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-02-03 04:27:59.411132 | orchestrator | Tuesday 03 February 2026 04:27:09 +0000 (0:00:09.731) 0:01:12.155 ****** 2026-02-03 04:27:59.411139 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-02-03 04:27:59.411145 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 
'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-02-03 04:27:59.411153 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-02-03 04:27:59.411161 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-02-03 04:27:59.411167 | orchestrator | 2026-02-03 04:27:59.411174 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-02-03 04:27:59.411183 | orchestrator | Tuesday 03 February 2026 04:27:26 +0000 (0:00:16.504) 0:01:28.660 ****** 2026-02-03 04:27:59.411189 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:27:59.411196 | orchestrator | 2026-02-03 04:27:59.411202 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-02-03 04:27:59.411208 | orchestrator | Tuesday 03 February 2026 04:27:30 +0000 (0:00:04.614) 0:01:33.275 ****** 2026-02-03 04:27:59.411215 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:27:59.411221 | orchestrator | 2026-02-03 04:27:59.411227 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-02-03 04:27:59.411234 | orchestrator | Tuesday 03 February 2026 04:27:36 +0000 (0:00:05.578) 0:01:38.854 ****** 2026-02-03 04:27:59.411240 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:27:59.411246 | orchestrator | 2026-02-03 04:27:59.411252 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-02-03 04:27:59.411259 | orchestrator | Tuesday 03 February 2026 04:27:36 +0000 (0:00:00.231) 0:01:39.086 ****** 2026-02-03 04:27:59.411265 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:27:59.411272 | orchestrator | 2026-02-03 04:27:59.411278 | orchestrator | TASK [octavia : include_tasks] 
************************************************* 2026-02-03 04:27:59.411287 | orchestrator | Tuesday 03 February 2026 04:27:40 +0000 (0:00:04.369) 0:01:43.455 ****** 2026-02-03 04:27:59.411297 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 04:27:59.411305 | orchestrator | 2026-02-03 04:27:59.411312 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-02-03 04:27:59.411324 | orchestrator | Tuesday 03 February 2026 04:27:41 +0000 (0:00:00.944) 0:01:44.400 ****** 2026-02-03 04:27:59.411331 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:27:59.411337 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:27:59.411343 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:27:59.411350 | orchestrator | 2026-02-03 04:27:59.411387 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-02-03 04:27:59.411398 | orchestrator | Tuesday 03 February 2026 04:27:47 +0000 (0:00:05.253) 0:01:49.653 ****** 2026-02-03 04:27:59.411405 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:27:59.411411 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:27:59.411417 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:27:59.411423 | orchestrator | 2026-02-03 04:27:59.411430 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-02-03 04:27:59.411436 | orchestrator | Tuesday 03 February 2026 04:27:51 +0000 (0:00:04.572) 0:01:54.226 ****** 2026-02-03 04:27:59.411442 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:27:59.411448 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:27:59.411455 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:27:59.411461 | orchestrator | 2026-02-03 04:27:59.411467 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-02-03 
04:27:59.411474 | orchestrator | Tuesday 03 February 2026 04:27:52 +0000 (0:00:01.041) 0:01:55.267 ****** 2026-02-03 04:27:59.411480 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:27:59.411486 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:27:59.411492 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:27:59.411498 | orchestrator | 2026-02-03 04:27:59.411505 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-02-03 04:27:59.411511 | orchestrator | Tuesday 03 February 2026 04:27:54 +0000 (0:00:01.921) 0:01:57.189 ****** 2026-02-03 04:27:59.411517 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:27:59.411524 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:27:59.411530 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:27:59.411536 | orchestrator | 2026-02-03 04:27:59.411542 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-02-03 04:27:59.411548 | orchestrator | Tuesday 03 February 2026 04:27:55 +0000 (0:00:01.308) 0:01:58.498 ****** 2026-02-03 04:27:59.411555 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:27:59.411561 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:27:59.411567 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:27:59.411573 | orchestrator | 2026-02-03 04:27:59.411579 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-02-03 04:27:59.411586 | orchestrator | Tuesday 03 February 2026 04:27:57 +0000 (0:00:01.248) 0:01:59.746 ****** 2026-02-03 04:27:59.411592 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:27:59.411598 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:27:59.411609 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:27:59.411617 | orchestrator | 2026-02-03 04:27:59.411629 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-02-03 04:28:26.203663 | orchestrator 
| Tuesday 03 February 2026 04:27:59 +0000 (0:00:02.267) 0:02:02.014 ****** 2026-02-03 04:28:26.204820 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:28:26.204858 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:28:26.204871 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:28:26.204883 | orchestrator | 2026-02-03 04:28:26.204897 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-02-03 04:28:26.204909 | orchestrator | Tuesday 03 February 2026 04:28:01 +0000 (0:00:01.654) 0:02:03.668 ****** 2026-02-03 04:28:26.204920 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:28:26.204933 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:28:26.204944 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:28:26.204955 | orchestrator | 2026-02-03 04:28:26.204967 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-02-03 04:28:26.204979 | orchestrator | Tuesday 03 February 2026 04:28:01 +0000 (0:00:00.684) 0:02:04.353 ****** 2026-02-03 04:28:26.205015 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:28:26.205027 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:28:26.205038 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:28:26.205049 | orchestrator | 2026-02-03 04:28:26.205060 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-03 04:28:26.205072 | orchestrator | Tuesday 03 February 2026 04:28:04 +0000 (0:00:02.826) 0:02:07.180 ****** 2026-02-03 04:28:26.205084 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 04:28:26.205096 | orchestrator | 2026-02-03 04:28:26.205107 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-02-03 04:28:26.205118 | orchestrator | Tuesday 03 February 2026 04:28:05 +0000 (0:00:00.802) 0:02:07.982 ****** 2026-02-03 
04:28:26.205129 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:28:26.205140 | orchestrator | 2026-02-03 04:28:26.205151 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-02-03 04:28:26.205163 | orchestrator | Tuesday 03 February 2026 04:28:09 +0000 (0:00:04.211) 0:02:12.193 ****** 2026-02-03 04:28:26.205174 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:28:26.205185 | orchestrator | 2026-02-03 04:28:26.205196 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-02-03 04:28:26.205208 | orchestrator | Tuesday 03 February 2026 04:28:12 +0000 (0:00:03.145) 0:02:15.339 ****** 2026-02-03 04:28:26.205219 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-02-03 04:28:26.205230 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-02-03 04:28:26.205241 | orchestrator | 2026-02-03 04:28:26.205253 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-02-03 04:28:26.205264 | orchestrator | Tuesday 03 February 2026 04:28:19 +0000 (0:00:06.835) 0:02:22.174 ****** 2026-02-03 04:28:26.205275 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:28:26.205286 | orchestrator | 2026-02-03 04:28:26.205297 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-02-03 04:28:26.205308 | orchestrator | Tuesday 03 February 2026 04:28:23 +0000 (0:00:04.001) 0:02:26.176 ****** 2026-02-03 04:28:26.205319 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:28:26.205330 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:28:26.205341 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:28:26.205352 | orchestrator | 2026-02-03 04:28:26.205387 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-02-03 04:28:26.205399 | orchestrator | Tuesday 03 February 2026 04:28:23 +0000 (0:00:00.325) 0:02:26.501 ****** 
2026-02-03 04:28:26.205428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-03 04:28:26.205465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-03 04:28:26.205487 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-03 04:28:26.205500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-03 04:28:26.205513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-03 04:28:26.205530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-03 04:28:26.205542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-03 04:28:26.205555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-03 04:28:26.205581 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-03 04:28:27.668962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-03 04:28:27.669089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-03 04:28:27.669116 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-03 04:28:27.669158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:28:27.669178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:28:27.669358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:28:27.669415 | orchestrator | 2026-02-03 04:28:27.669436 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-02-03 04:28:27.669455 | orchestrator | Tuesday 03 February 2026 04:28:26 +0000 (0:00:02.738) 0:02:29.239 ****** 2026-02-03 04:28:27.669471 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:28:27.669488 | orchestrator | 2026-02-03 04:28:27.669507 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-02-03 04:28:27.669524 | orchestrator | Tuesday 03 February 2026 04:28:26 +0000 (0:00:00.147) 0:02:29.386 ****** 2026-02-03 04:28:27.669541 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:28:27.669580 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:28:27.669594 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:28:27.669605 | orchestrator | 2026-02-03 04:28:27.669617 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-02-03 04:28:27.669626 | orchestrator | Tuesday 03 February 2026 04:28:27 +0000 (0:00:00.311) 0:02:29.697 ****** 2026-02-03 04:28:27.669638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-03 04:28:27.669650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-03 04:28:27.669671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-03 04:28:27.669682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-03 04:28:27.669701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-03 04:28:27.669712 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:28:27.669731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-03 04:28:32.641279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-03 04:28:32.641436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-03 04:28:32.641474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-03 04:28:32.641513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-03 04:28:32.641528 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:28:32.641543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-03 04:28:32.641556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-03 04:28:32.641589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-03 04:28:32.641602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-03 04:28:32.641619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-03 04:28:32.641640 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:28:32.641652 | orchestrator | 2026-02-03 04:28:32.641665 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-03 04:28:32.641709 | orchestrator | Tuesday 03 February 2026 04:28:27 +0000 (0:00:00.694) 0:02:30.392 ****** 2026-02-03 04:28:32.641722 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 04:28:32.641734 | orchestrator | 2026-02-03 04:28:32.641745 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-02-03 04:28:32.641756 | orchestrator | Tuesday 03 February 2026 04:28:28 +0000 (0:00:00.770) 0:02:31.162 ****** 2026-02-03 04:28:32.641771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-03 04:28:32.641787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-03 04:28:32.641810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-03 04:28:34.229927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-03 04:28:34.230135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-03 04:28:34.230166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-03 04:28:34.230180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-03 04:28:34.230195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-03 04:28:34.230207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-03 04:28:34.230243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-03 04:28:34.230263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-03 04:28:34.230287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-03 04:28:34.230301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:28:34.230314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:28:34.230327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:28:34.230341 | orchestrator | 2026-02-03 04:28:34.230355 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-02-03 04:28:34.230441 | orchestrator | Tuesday 03 February 2026 04:28:33 +0000 (0:00:05.027) 0:02:36.189 ****** 2026-02-03 04:28:34.230471 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-03 04:28:34.337862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-03 04:28:34.337983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-03 04:28:34.338002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-03 04:28:34.338080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-03 04:28:34.338098 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:28:34.338113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-03 04:28:34.338126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-03 04:28:34.338178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-03 04:28:34.338197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-03 04:28:34.338209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-03 04:28:34.338221 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:28:34.338232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-03 04:28:34.338244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-03 04:28:34.338255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-03 04:28:34.338283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2026-02-03 04:28:34.926625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-03 04:28:34.926729 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:28:34.926751 | orchestrator | 2026-02-03 04:28:34.926773 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-02-03 04:28:34.926790 | orchestrator | Tuesday 03 February 2026 04:28:34 +0000 (0:00:00.760) 0:02:36.949 ****** 2026-02-03 04:28:34.926809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  
2026-02-03 04:28:34.926830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-03 04:28:34.926851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-03 04:28:34.926898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-03 04:28:34.926930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-03 04:28:34.926943 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:28:34.926962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-03 04:28:34.926974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-03 04:28:34.926985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-03 04:28:34.926997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-03 04:28:34.927018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-03 04:28:34.927029 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:28:34.927064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-03 04:28:39.664930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-03 04:28:39.665014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-03 04:28:39.665023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-03 04:28:39.665029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-03 04:28:39.665053 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:28:39.665060 | orchestrator | 2026-02-03 04:28:39.665066 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-02-03 
04:28:39.665072 | orchestrator | Tuesday 03 February 2026 04:28:35 +0000 (0:00:01.138) 0:02:38.088 ****** 2026-02-03 04:28:39.665078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-03 04:28:39.665107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-03 04:28:39.665113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-03 04:28:39.665118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-03 04:28:39.665124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-03 04:28:39.665134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-03 04:28:39.665139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-03 04:28:39.665151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-03 04:28:56.351974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-03 04:28:56.352082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-03 04:28:56.352097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-03 04:28:56.352130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-03 04:28:56.352143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:28:56.352153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2026-02-03 04:28:56.352194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:28:56.352206 | orchestrator | 2026-02-03 04:28:56.352219 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-02-03 04:28:56.352231 | orchestrator | Tuesday 03 February 2026 04:28:40 +0000 (0:00:05.112) 0:02:43.200 ****** 2026-02-03 04:28:56.352240 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-03 04:28:56.352251 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-03 04:28:56.352261 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-03 04:28:56.352270 | orchestrator | 2026-02-03 04:28:56.352280 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-02-03 04:28:56.352290 | orchestrator | Tuesday 03 February 2026 04:28:42 +0000 (0:00:01.649) 0:02:44.850 ****** 2026-02-03 04:28:56.352300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-03 04:28:56.352319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-03 04:28:56.352330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-03 04:28:56.352352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-03 04:29:12.368489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-03 04:29:12.368611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-03 04:29:12.368656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-03 04:29:12.368671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-03 04:29:12.368683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-03 04:29:12.368696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-03 04:29:12.368739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-03 04:29:12.368753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-03 04:29:12.368766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:29:12.368787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:29:12.368799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:29:12.368811 | orchestrator | 2026-02-03 04:29:12.368826 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-02-03 04:29:12.368839 | orchestrator | Tuesday 03 February 2026 04:28:59 +0000 (0:00:17.493) 0:03:02.343 ****** 2026-02-03 04:29:12.368850 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:29:12.368863 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:29:12.368874 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:29:12.368885 | orchestrator | 2026-02-03 04:29:12.368897 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-02-03 04:29:12.368908 | orchestrator | Tuesday 03 February 2026 04:29:01 +0000 (0:00:02.042) 0:03:04.386 ****** 2026-02-03 04:29:12.368919 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-03 04:29:12.368930 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-03 04:29:12.368941 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-03 04:29:12.368952 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-03 04:29:12.368963 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-03 04:29:12.368976 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-03 04:29:12.368989 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-03 04:29:12.369002 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-03 04:29:12.369015 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-03 04:29:12.369027 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-03 04:29:12.369039 | orchestrator 
| changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-03 04:29:12.369052 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-03 04:29:12.369065 | orchestrator | 2026-02-03 04:29:12.369082 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-02-03 04:29:12.369095 | orchestrator | Tuesday 03 February 2026 04:29:06 +0000 (0:00:05.091) 0:03:09.478 ****** 2026-02-03 04:29:12.369108 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-03 04:29:12.369121 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-03 04:29:12.369149 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-03 04:29:20.932490 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-03 04:29:20.932573 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-03 04:29:20.932582 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-03 04:29:20.932589 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-03 04:29:20.932595 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-03 04:29:20.932602 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-03 04:29:20.932608 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-03 04:29:20.932614 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-03 04:29:20.932620 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-03 04:29:20.932626 | orchestrator | 2026-02-03 04:29:20.932634 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-02-03 04:29:20.932641 | orchestrator | Tuesday 03 February 2026 04:29:12 +0000 (0:00:05.497) 0:03:14.975 ****** 2026-02-03 04:29:20.932647 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem) 2026-02-03 04:29:20.932653 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-03 04:29:20.932659 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-03 04:29:20.932665 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-03 04:29:20.932670 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-03 04:29:20.932676 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-03 04:29:20.932682 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-03 04:29:20.932688 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-03 04:29:20.932694 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-03 04:29:20.932700 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-03 04:29:20.932706 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-03 04:29:20.932711 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-03 04:29:20.932717 | orchestrator | 2026-02-03 04:29:20.932724 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-02-03 04:29:20.932730 | orchestrator | Tuesday 03 February 2026 04:29:17 +0000 (0:00:05.248) 0:03:20.223 ****** 2026-02-03 04:29:20.932738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-03 04:29:20.932748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-03 04:29:20.932803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-03 04:29:20.932812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-03 04:29:20.932820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-03 04:29:20.932826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-02-03 04:29:20.932833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-03 04:29:20.932840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-03 04:29:20.932855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-03 04:29:20.932866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-03 04:30:44.913712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-03 04:30:44.913836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-03 04:30:44.913856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:30:44.913870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:30:44.913910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-03 04:30:44.913924 | orchestrator | 2026-02-03 
04:30:44.913941 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-03 04:30:44.913953 | orchestrator | Tuesday 03 February 2026 04:29:21 +0000 (0:00:04.028) 0:03:24.251 ****** 2026-02-03 04:30:44.913965 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:30:44.913994 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:30:44.914006 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:30:44.914082 | orchestrator | 2026-02-03 04:30:44.914097 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-02-03 04:30:44.914108 | orchestrator | Tuesday 03 February 2026 04:29:22 +0000 (0:00:00.566) 0:03:24.818 ****** 2026-02-03 04:30:44.914119 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:30:44.914129 | orchestrator | 2026-02-03 04:30:44.914140 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-02-03 04:30:44.914151 | orchestrator | Tuesday 03 February 2026 04:29:24 +0000 (0:00:02.113) 0:03:26.931 ****** 2026-02-03 04:30:44.914162 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:30:44.914173 | orchestrator | 2026-02-03 04:30:44.914183 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-02-03 04:30:44.914194 | orchestrator | Tuesday 03 February 2026 04:29:26 +0000 (0:00:02.165) 0:03:29.096 ****** 2026-02-03 04:30:44.914205 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:30:44.914216 | orchestrator | 2026-02-03 04:30:44.914227 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-02-03 04:30:44.914241 | orchestrator | Tuesday 03 February 2026 04:29:28 +0000 (0:00:02.240) 0:03:31.337 ****** 2026-02-03 04:30:44.914273 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:30:44.914286 | orchestrator | 2026-02-03 04:30:44.914298 | orchestrator | TASK [octavia : Running Octavia 
bootstrap container] *************************** 2026-02-03 04:30:44.914308 | orchestrator | Tuesday 03 February 2026 04:29:31 +0000 (0:00:02.301) 0:03:33.639 ****** 2026-02-03 04:30:44.914319 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:30:44.914331 | orchestrator | 2026-02-03 04:30:44.914341 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-03 04:30:44.914352 | orchestrator | Tuesday 03 February 2026 04:29:53 +0000 (0:00:22.252) 0:03:55.891 ****** 2026-02-03 04:30:44.914362 | orchestrator | 2026-02-03 04:30:44.914373 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-03 04:30:44.914384 | orchestrator | Tuesday 03 February 2026 04:29:53 +0000 (0:00:00.072) 0:03:55.963 ****** 2026-02-03 04:30:44.914395 | orchestrator | 2026-02-03 04:30:44.914406 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-03 04:30:44.914442 | orchestrator | Tuesday 03 February 2026 04:29:53 +0000 (0:00:00.072) 0:03:56.035 ****** 2026-02-03 04:30:44.914455 | orchestrator | 2026-02-03 04:30:44.914466 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-02-03 04:30:44.914477 | orchestrator | Tuesday 03 February 2026 04:29:53 +0000 (0:00:00.072) 0:03:56.107 ****** 2026-02-03 04:30:44.914489 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:30:44.914497 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:30:44.914504 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:30:44.914511 | orchestrator | 2026-02-03 04:30:44.914517 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-02-03 04:30:44.914537 | orchestrator | Tuesday 03 February 2026 04:30:05 +0000 (0:00:12.151) 0:04:08.259 ****** 2026-02-03 04:30:44.914544 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:30:44.914550 | orchestrator | changed: 
[testbed-node-1] 2026-02-03 04:30:44.914557 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:30:44.914564 | orchestrator | 2026-02-03 04:30:44.914570 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-02-03 04:30:44.914577 | orchestrator | Tuesday 03 February 2026 04:30:12 +0000 (0:00:07.214) 0:04:15.474 ****** 2026-02-03 04:30:44.914584 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:30:44.914590 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:30:44.914597 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:30:44.914604 | orchestrator | 2026-02-03 04:30:44.914611 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-02-03 04:30:44.914617 | orchestrator | Tuesday 03 February 2026 04:30:23 +0000 (0:00:10.665) 0:04:26.139 ****** 2026-02-03 04:30:44.914624 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:30:44.914635 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:30:44.914647 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:30:44.914654 | orchestrator | 2026-02-03 04:30:44.914661 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-02-03 04:30:44.914667 | orchestrator | Tuesday 03 February 2026 04:30:33 +0000 (0:00:10.470) 0:04:36.610 ****** 2026-02-03 04:30:44.914674 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:30:44.914680 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:30:44.914687 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:30:44.914693 | orchestrator | 2026-02-03 04:30:44.914700 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 04:30:44.914708 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-03 04:30:44.914716 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-02-03 04:30:44.914722 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-03 04:30:44.914729 | orchestrator | 2026-02-03 04:30:44.914735 | orchestrator | 2026-02-03 04:30:44.914742 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 04:30:44.914749 | orchestrator | Tuesday 03 February 2026 04:30:44 +0000 (0:00:10.901) 0:04:47.512 ****** 2026-02-03 04:30:44.914759 | orchestrator | =============================================================================== 2026-02-03 04:30:44.914769 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.25s 2026-02-03 04:30:44.914778 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 17.49s 2026-02-03 04:30:44.914787 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.50s 2026-02-03 04:30:44.914800 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.55s 2026-02-03 04:30:44.914824 | orchestrator | octavia : Restart octavia-api container -------------------------------- 12.15s 2026-02-03 04:30:44.914835 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.90s 2026-02-03 04:30:44.914846 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.67s 2026-02-03 04:30:44.914857 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.47s 2026-02-03 04:30:44.914867 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.73s 2026-02-03 04:30:44.914878 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.12s 2026-02-03 04:30:44.914888 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.24s 2026-02-03 04:30:44.914899 
| orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 7.21s 2026-02-03 04:30:44.914939 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.84s 2026-02-03 04:30:44.914951 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.44s 2026-02-03 04:30:44.914987 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.58s 2026-02-03 04:30:45.329210 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.50s 2026-02-03 04:30:45.329309 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.25s 2026-02-03 04:30:45.329323 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.25s 2026-02-03 04:30:45.329335 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.11s 2026-02-03 04:30:45.329346 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.09s 2026-02-03 04:30:47.919366 | orchestrator | 2026-02-03 04:30:47 | INFO  | Task ca486c51-4a1c-4e3d-a2d5-4eaa60f27efb (ceilometer) was prepared for execution. 2026-02-03 04:30:47.919620 | orchestrator | 2026-02-03 04:30:47 | INFO  | It takes a moment until task ca486c51-4a1c-4e3d-a2d5-4eaa60f27efb (ceilometer) has been started and output is visible here. 
2026-02-03 04:31:11.919931 | orchestrator | 2026-02-03 04:31:11.920033 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-03 04:31:11.920046 | orchestrator | 2026-02-03 04:31:11.920056 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-03 04:31:11.920065 | orchestrator | Tuesday 03 February 2026 04:30:52 +0000 (0:00:00.284) 0:00:00.284 ****** 2026-02-03 04:31:11.920075 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:31:11.920087 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:31:11.920096 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:31:11.920105 | orchestrator | ok: [testbed-node-3] 2026-02-03 04:31:11.920114 | orchestrator | ok: [testbed-node-4] 2026-02-03 04:31:11.920123 | orchestrator | ok: [testbed-node-5] 2026-02-03 04:31:11.920132 | orchestrator | 2026-02-03 04:31:11.920141 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-03 04:31:11.920150 | orchestrator | Tuesday 03 February 2026 04:30:53 +0000 (0:00:00.774) 0:00:01.058 ****** 2026-02-03 04:31:11.920160 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True) 2026-02-03 04:31:11.920169 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-02-03 04:31:11.920178 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-02-03 04:31:11.920186 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-02-03 04:31:11.920195 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-02-03 04:31:11.920204 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-02-03 04:31:11.920213 | orchestrator | 2026-02-03 04:31:11.920222 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-02-03 04:31:11.920231 | orchestrator | 2026-02-03 04:31:11.920240 | orchestrator | TASK [ceilometer : 
include_tasks] ********************************************** 2026-02-03 04:31:11.920249 | orchestrator | Tuesday 03 February 2026 04:30:53 +0000 (0:00:00.585) 0:00:01.644 ****** 2026-02-03 04:31:11.920335 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-03 04:31:11.920346 | orchestrator | 2026-02-03 04:31:11.920378 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ******************** 2026-02-03 04:31:11.920387 | orchestrator | Tuesday 03 February 2026 04:30:55 +0000 (0:00:01.122) 0:00:02.767 ****** 2026-02-03 04:31:11.920397 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:31:11.920406 | orchestrator | 2026-02-03 04:31:11.920415 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] ******************* 2026-02-03 04:31:11.920424 | orchestrator | Tuesday 03 February 2026 04:30:55 +0000 (0:00:00.122) 0:00:02.890 ****** 2026-02-03 04:31:11.920472 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:31:11.920503 | orchestrator | 2026-02-03 04:31:11.920513 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ******************** 2026-02-03 04:31:11.920523 | orchestrator | Tuesday 03 February 2026 04:30:55 +0000 (0:00:00.148) 0:00:03.038 ****** 2026-02-03 04:31:11.920533 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-03 04:31:11.920543 | orchestrator | 2026-02-03 04:31:11.920553 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] *********************** 2026-02-03 04:31:11.920562 | orchestrator | Tuesday 03 February 2026 04:30:59 +0000 (0:00:03.785) 0:00:06.824 ****** 2026-02-03 04:31:11.920572 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-03 04:31:11.920582 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service) 2026-02-03 04:31:11.920591 | orchestrator | 
2026-02-03 04:31:11.920601 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] *********************** 2026-02-03 04:31:11.920611 | orchestrator | Tuesday 03 February 2026 04:31:03 +0000 (0:00:03.976) 0:00:10.800 ****** 2026-02-03 04:31:11.920621 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-03 04:31:11.920631 | orchestrator | 2026-02-03 04:31:11.920664 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ****************** 2026-02-03 04:31:11.920673 | orchestrator | Tuesday 03 February 2026 04:31:06 +0000 (0:00:03.230) 0:00:14.031 ****** 2026-02-03 04:31:11.920683 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin) 2026-02-03 04:31:11.920693 | orchestrator | 2026-02-03 04:31:11.920703 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] ******* 2026-02-03 04:31:11.920712 | orchestrator | Tuesday 03 February 2026 04:31:10 +0000 (0:00:03.910) 0:00:17.941 ****** 2026-02-03 04:31:11.920722 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:31:11.920732 | orchestrator | 2026-02-03 04:31:11.920742 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-02-03 04:31:11.920752 | orchestrator | Tuesday 03 February 2026 04:31:10 +0000 (0:00:00.135) 0:00:18.076 ****** 2026-02-03 04:31:11.920765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-03 04:31:11.920795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-03 04:31:11.920805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-03 04:31:11.920815 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-03 04:31:11.920834 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-03 04:31:11.920843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-03 04:31:11.920852 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-03 04:31:11.920867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-03 04:31:17.103277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-03 04:31:17.103405 | orchestrator | 2026-02-03 04:31:17.103422 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-02-03 04:31:17.103492 | orchestrator | Tuesday 03 February 2026 04:31:11 +0000 (0:00:01.570) 0:00:19.647 ****** 2026-02-03 04:31:17.103505 | orchestrator | ok: [testbed-node-1 -> 
localhost] 2026-02-03 04:31:17.103516 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-03 04:31:17.103527 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-03 04:31:17.103538 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-03 04:31:17.103549 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-03 04:31:17.103560 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-03 04:31:17.103570 | orchestrator | 2026-02-03 04:31:17.103582 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-02-03 04:31:17.103594 | orchestrator | Tuesday 03 February 2026 04:31:13 +0000 (0:00:01.741) 0:00:21.389 ****** 2026-02-03 04:31:17.103605 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:31:17.103617 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:31:17.103627 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:31:17.103638 | orchestrator | ok: [testbed-node-3] 2026-02-03 04:31:17.103648 | orchestrator | ok: [testbed-node-4] 2026-02-03 04:31:17.103659 | orchestrator | ok: [testbed-node-5] 2026-02-03 04:31:17.103670 | orchestrator | 2026-02-03 04:31:17.103680 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-02-03 04:31:17.103692 | orchestrator | Tuesday 03 February 2026 04:31:14 +0000 (0:00:00.636) 0:00:22.025 ****** 2026-02-03 04:31:17.103702 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:31:17.103713 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:31:17.103725 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:31:17.103736 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:31:17.103747 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:31:17.103757 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:31:17.103768 | orchestrator | 2026-02-03 04:31:17.103779 | orchestrator | TASK [ceilometer : Set the variable that control the copy of custom meter 
definitions] *** 2026-02-03 04:31:17.103791 | orchestrator | Tuesday 03 February 2026 04:31:15 +0000 (0:00:00.843) 0:00:22.869 ****** 2026-02-03 04:31:17.103802 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:31:17.103815 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:31:17.103828 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:31:17.103841 | orchestrator | ok: [testbed-node-3] 2026-02-03 04:31:17.103854 | orchestrator | ok: [testbed-node-4] 2026-02-03 04:31:17.103909 | orchestrator | ok: [testbed-node-5] 2026-02-03 04:31:17.103923 | orchestrator | 2026-02-03 04:31:17.103940 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-02-03 04:31:17.103953 | orchestrator | Tuesday 03 February 2026 04:31:15 +0000 (0:00:00.672) 0:00:23.542 ****** 2026-02-03 04:31:17.103968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-03 04:31:17.103983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-03 04:31:17.104024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-03 04:31:17.104038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-03 04:31:17.104052 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:31:17.104065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-03 04:31:17.104078 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:31:17.104091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-03 04:31:17.104110 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-03 04:31:17.104123 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:31:17.104136 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:31:17.104150 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': 
{'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-03 04:31:17.104170 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:31:17.104190 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-03 04:31:22.172038 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:31:22.172998 | orchestrator | 2026-02-03 04:31:22.173032 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-02-03 04:31:22.173044 | orchestrator | Tuesday 03 February 2026 04:31:17 +0000 (0:00:01.294) 0:00:24.836 ****** 2026-02-03 04:31:22.173055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 
'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-03 04:31:22.173068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-03 04:31:22.173078 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:31:22.173103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-03 04:31:22.173113 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-03 04:31:22.173148 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:31:22.173165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-03 04:31:22.173182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-03 
04:31:22.173197 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:31:22.173237 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-03 04:31:22.173255 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:31:22.173271 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-03 04:31:22.173286 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:31:22.173305 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-03 04:31:22.173315 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:31:22.173332 | orchestrator | 2026-02-03 04:31:22.173342 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-02-03 04:31:22.173352 | orchestrator | Tuesday 03 February 2026 04:31:17 +0000 (0:00:00.842) 0:00:25.679 ****** 2026-02-03 04:31:22.173362 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-03 04:31:22.173370 | orchestrator | 2026-02-03 04:31:22.173379 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-02-03 04:31:22.173389 | orchestrator | Tuesday 03 February 2026 04:31:18 +0000 (0:00:00.728) 0:00:26.408 ****** 2026-02-03 04:31:22.173397 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:31:22.173413 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:31:22.173427 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:31:22.173467 | orchestrator | ok: [testbed-node-3] 2026-02-03 04:31:22.173482 | orchestrator | ok: [testbed-node-4] 2026-02-03 04:31:22.173497 | orchestrator | ok: [testbed-node-5] 2026-02-03 04:31:22.173512 | orchestrator | 2026-02-03 04:31:22.173526 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-02-03 04:31:22.173537 | orchestrator | Tuesday 03 February 2026 04:31:19 +0000 (0:00:00.906) 
0:00:27.314 ****** 2026-02-03 04:31:22.173547 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:31:22.173562 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:31:22.173577 | orchestrator | ok: [testbed-node-3] 2026-02-03 04:31:22.173590 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:31:22.173606 | orchestrator | ok: [testbed-node-4] 2026-02-03 04:31:22.173620 | orchestrator | ok: [testbed-node-5] 2026-02-03 04:31:22.173634 | orchestrator | 2026-02-03 04:31:22.173643 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-02-03 04:31:22.173651 | orchestrator | Tuesday 03 February 2026 04:31:20 +0000 (0:00:01.005) 0:00:28.319 ****** 2026-02-03 04:31:22.173660 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:31:22.173669 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:31:22.173677 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:31:22.173686 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:31:22.173694 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:31:22.173703 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:31:22.173711 | orchestrator | 2026-02-03 04:31:22.173720 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-02-03 04:31:22.173728 | orchestrator | Tuesday 03 February 2026 04:31:21 +0000 (0:00:00.907) 0:00:29.227 ****** 2026-02-03 04:31:22.173737 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:31:22.173746 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:31:22.173754 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:31:22.173763 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:31:22.173771 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:31:22.173780 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:31:22.173788 | orchestrator | 2026-02-03 04:31:27.809815 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] 
************************ 2026-02-03 04:31:27.809932 | orchestrator | Tuesday 03 February 2026 04:31:22 +0000 (0:00:00.682) 0:00:29.909 ****** 2026-02-03 04:31:27.809949 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-03 04:31:27.809965 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-03 04:31:27.809979 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-03 04:31:27.809994 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-03 04:31:27.810008 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-03 04:31:27.810082 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-03 04:31:27.810097 | orchestrator | 2026-02-03 04:31:27.810113 | orchestrator | TASK [ceilometer : Copying over polling.yaml] ********************************** 2026-02-03 04:31:27.810128 | orchestrator | Tuesday 03 February 2026 04:31:23 +0000 (0:00:01.617) 0:00:31.527 ****** 2026-02-03 04:31:27.810202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-03 04:31:27.810250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-03 04:31:27.810266 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:31:27.810296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-03 04:31:27.810311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-03 04:31:27.810325 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:31:27.810340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-03 04:31:27.810454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-03 04:31:27.810472 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:31:27.810487 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-03 04:31:27.810512 | orchestrator | skipping: [testbed-node-3] 
2026-02-03 04:31:27.810529 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-03 04:31:27.810545 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:31:27.810567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-03 04:31:27.810583 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:31:27.810598 | orchestrator | 2026-02-03 04:31:27.810614 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] ************************* 2026-02-03 04:31:27.810630 | orchestrator | Tuesday 03 February 2026 04:31:24 +0000 (0:00:00.832) 0:00:32.360 ****** 2026-02-03 04:31:27.810645 | orchestrator | 
skipping: [testbed-node-0] 2026-02-03 04:31:27.810660 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:31:27.810676 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:31:27.810732 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:31:27.810747 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:31:27.810760 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:31:27.810774 | orchestrator | 2026-02-03 04:31:27.810788 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] ***************** 2026-02-03 04:31:27.810801 | orchestrator | Tuesday 03 February 2026 04:31:25 +0000 (0:00:00.839) 0:00:33.199 ****** 2026-02-03 04:31:27.810815 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-03 04:31:27.810829 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-03 04:31:27.810842 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-03 04:31:27.810856 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-03 04:31:27.810870 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-03 04:31:27.810883 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-03 04:31:27.810896 | orchestrator | 2026-02-03 04:31:27.810910 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************ 2026-02-03 04:31:27.810925 | orchestrator | Tuesday 03 February 2026 04:31:27 +0000 (0:00:01.896) 0:00:35.096 ****** 2026-02-03 04:31:27.810948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-03 04:31:33.755093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-03 04:31:33.755218 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:31:33.755247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-03 04:31:33.755288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-03 04:31:33.755311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-03 04:31:33.755330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-03 04:31:33.755349 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:31:33.755368 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-03 04:31:33.755417 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:31:33.755502 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:31:33.755549 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-03 04:31:33.755570 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:31:33.755590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-03 04:31:33.755610 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:31:33.755630 | orchestrator | 2026-02-03 04:31:33.755651 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] *************** 2026-02-03 04:31:33.755673 | orchestrator | Tuesday 03 February 2026 04:31:28 +0000 (0:00:01.179) 0:00:36.275 ****** 2026-02-03 04:31:33.755693 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:31:33.755713 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:31:33.755740 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:31:33.755762 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:31:33.755782 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:31:33.755801 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:31:33.755820 | orchestrator | 2026-02-03 04:31:33.755839 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] ********************* 2026-02-03 04:31:33.755859 | orchestrator | Tuesday 03 February 2026 04:31:29 +0000 (0:00:00.839) 0:00:37.115 ****** 2026-02-03 04:31:33.755879 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:31:33.755899 | orchestrator | 2026-02-03 04:31:33.755919 | orchestrator | TASK [ceilometer : Set ceilometer policy file] ********************************* 2026-02-03 04:31:33.755938 | orchestrator | Tuesday 03 February 2026 04:31:29 +0000 (0:00:00.152) 0:00:37.267 ****** 2026-02-03 04:31:33.755958 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:31:33.755978 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:31:33.755997 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:31:33.756016 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:31:33.756034 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:31:33.756053 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:31:33.756072 | orchestrator | 2026-02-03 
04:31:33.756090 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-02-03 04:31:33.756123 | orchestrator | Tuesday 03 February 2026 04:31:30 +0000 (0:00:00.600) 0:00:37.868 ****** 2026-02-03 04:31:33.756143 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-03 04:31:33.756165 | orchestrator | 2026-02-03 04:31:33.756184 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] ***** 2026-02-03 04:31:33.756203 | orchestrator | Tuesday 03 February 2026 04:31:31 +0000 (0:00:01.326) 0:00:39.195 ****** 2026-02-03 04:31:33.756223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-03 04:31:33.756256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-03 04:31:34.286591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-03 04:31:34.286677 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-03 04:31:34.286704 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-03 04:31:34.286734 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-03 04:31:34.286742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-03 04:31:34.286750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-03 04:31:34.286770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-03 04:31:34.286778 | orchestrator | 2026-02-03 04:31:34.286788 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] *** 2026-02-03 04:31:34.286796 | orchestrator | Tuesday 03 February 2026 04:31:33 +0000 (0:00:02.293) 0:00:41.488 ****** 2026-02-03 04:31:34.286804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-03 04:31:34.286815 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-03 04:31:34.286829 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:31:34.286837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-03 04:31:34.286844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  
2026-02-03 04:31:34.286852 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:31:34.286859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-03 04:31:34.286872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-03 04:31:36.229635 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:31:36.229721 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-03 04:31:36.229734 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:31:36.229818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-03 04:31:36.229846 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:31:36.229855 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-03 04:31:36.229863 | orchestrator | skipping: [testbed-node-5] 2026-02-03 
04:31:36.229870 | orchestrator | 2026-02-03 04:31:36.229879 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] *** 2026-02-03 04:31:36.229889 | orchestrator | Tuesday 03 February 2026 04:31:34 +0000 (0:00:00.874) 0:00:42.363 ****** 2026-02-03 04:31:36.229897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-03 04:31:36.229906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-03 04:31:36.229932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-03 04:31:36.229941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-03 04:31:36.229959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-03 04:31:36.229967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-03 04:31:36.229975 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:31:36.229983 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:31:36.229991 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:31:36.229999 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-03 04:31:36.230007 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:31:36.230015 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-03 04:31:36.230067 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:31:36.230082 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-03 04:31:43.860207 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:31:43.860321 | orchestrator | 2026-02-03 04:31:43.860339 | orchestrator | TASK [ceilometer : Copying over config.json files for services] **************** 2026-02-03 04:31:43.860353 | orchestrator | Tuesday 03 February 2026 04:31:36 +0000 (0:00:01.596) 0:00:43.959 ****** 2026-02-03 04:31:43.860384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-03 04:31:43.860400 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-03 04:31:43.860412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-03 04:31:43.860425 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-03 04:31:43.860496 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-03 04:31:43.860531 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-03 04:31:43.860573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-03 04:31:43.860587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-03 04:31:43.860599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-03 04:31:43.860610 | orchestrator | 2026-02-03 04:31:43.860622 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] ******************************* 2026-02-03 04:31:43.860633 | orchestrator | Tuesday 03 February 2026 04:31:38 +0000 (0:00:02.613) 0:00:46.573 ****** 2026-02-03 04:31:43.860645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 
'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-03 04:31:43.860657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-03 04:31:43.860683 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-03 04:31:53.770828 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-03 04:31:53.770935 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-03 04:31:53.770976 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-03 04:31:53.770991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-03 04:31:53.771003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-03 04:31:53.771038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-03 04:31:53.771052 | orchestrator | 2026-02-03 04:31:53.771066 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] ***************** 2026-02-03 04:31:53.771079 | orchestrator | Tuesday 03 February 2026 04:31:43 +0000 (0:00:05.021) 0:00:51.595 ****** 2026-02-03 04:31:53.771106 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-03 04:31:53.771120 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-03 04:31:53.771131 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-03 04:31:53.771141 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-03 04:31:53.771152 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-03 04:31:53.771163 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-03 04:31:53.771174 | orchestrator | 2026-02-03 04:31:53.771185 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************ 2026-02-03 04:31:53.771196 | orchestrator | Tuesday 03 February 2026 04:31:45 +0000 (0:00:01.637) 0:00:53.233 ****** 2026-02-03 04:31:53.771207 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:31:53.771218 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:31:53.771235 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:31:53.771246 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:31:53.771257 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:31:53.771268 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:31:53.771279 | orchestrator | 2026-02-03 04:31:53.771290 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] *** 2026-02-03 04:31:53.771301 | orchestrator | Tuesday 03 February 2026 04:31:46 +0000 (0:00:00.665) 0:00:53.898 ****** 2026-02-03 04:31:53.771312 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:31:53.771323 | orchestrator | skipping: [testbed-node-4] 
2026-02-03 04:31:53.771334 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:31:53.771345 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:31:53.771357 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:31:53.771370 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:31:53.771382 | orchestrator | 2026-02-03 04:31:53.771394 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] *************************** 2026-02-03 04:31:53.771407 | orchestrator | Tuesday 03 February 2026 04:31:47 +0000 (0:00:01.742) 0:00:55.641 ****** 2026-02-03 04:31:53.771419 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:31:53.771432 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:31:53.771522 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:31:53.771537 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:31:53.771549 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:31:53.771562 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:31:53.771574 | orchestrator | 2026-02-03 04:31:53.771586 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] ************************** 2026-02-03 04:31:53.771599 | orchestrator | Tuesday 03 February 2026 04:31:49 +0000 (0:00:01.491) 0:00:57.132 ****** 2026-02-03 04:31:53.771611 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-03 04:31:53.771624 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-03 04:31:53.771637 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-03 04:31:53.771648 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-03 04:31:53.771659 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-03 04:31:53.771669 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-03 04:31:53.771680 | orchestrator | 2026-02-03 04:31:53.771702 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] ********************* 2026-02-03 04:31:53.771713 | orchestrator | Tuesday 03 February 2026 04:31:51 +0000 
(0:00:01.722) 0:00:58.855 ****** 2026-02-03 04:31:53.771725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-03 04:31:53.771737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-03 04:31:53.771749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-03 04:31:53.771775 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-03 04:31:54.661421 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-03 04:31:54.661525 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-03 04:31:54.661555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-03 04:31:54.661565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-03 04:31:54.661572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 
'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-03 04:31:54.661579 | orchestrator | 2026-02-03 04:31:54.661588 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] **************************** 2026-02-03 04:31:54.661596 | orchestrator | Tuesday 03 February 2026 04:31:53 +0000 (0:00:02.649) 0:01:01.505 ****** 2026-02-03 04:31:54.661614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-03 04:31:54.661635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-03 04:31:54.661643 | 
orchestrator | skipping: [testbed-node-0] 2026-02-03 04:31:54.661652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-03 04:31:54.661670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-03 04:31:54.661677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-03 04:31:54.661684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-03 04:31:54.661690 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:31:54.661697 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-03 04:31:54.661704 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:31:54.661714 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:31:54.661724 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-03 04:31:58.203801 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:31:58.203908 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-03 04:31:58.203929 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:31:58.203942 | orchestrator | 2026-02-03 04:31:58.203955 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] ***************************** 2026-02-03 04:31:58.203967 | orchestrator | Tuesday 03 February 2026 04:31:54 +0000 (0:00:00.896) 0:01:02.401 ****** 2026-02-03 04:31:58.203979 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:31:58.203990 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:31:58.204001 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:31:58.204012 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:31:58.204023 | orchestrator | skipping: [testbed-node-4] 2026-02-03 
04:31:58.204035 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:31:58.204046 | orchestrator | 2026-02-03 04:31:58.204057 | orchestrator | TASK [ceilometer : Copying over existing policy file] ************************** 2026-02-03 04:31:58.204069 | orchestrator | Tuesday 03 February 2026 04:31:55 +0000 (0:00:00.833) 0:01:03.235 ****** 2026-02-03 04:31:58.204082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-03 04:31:58.204096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-03 04:31:58.204109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-03 04:31:58.204121 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:31:58.204152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-03 04:31:58.204187 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:31:58.204216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-03 04:31:58.204229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 
'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-03 04:31:58.204241 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:31:58.204252 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-03 04:31:58.204264 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:31:58.204275 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-03 04:31:58.204287 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:31:58.204304 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-03 04:31:58.204326 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:31:58.204338 | orchestrator | 2026-02-03 04:31:58.204352 | orchestrator | TASK [ceilometer : Check ceilometer containers] ******************************** 2026-02-03 04:31:58.204365 | orchestrator | Tuesday 03 February 2026 04:31:56 +0000 (0:00:00.912) 0:01:04.147 ****** 2026-02-03 04:31:58.204387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 
'timeout': '30'}}}) 2026-02-03 04:32:31.216672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-03 04:32:31.216790 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-03 04:32:31.216808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-03 04:32:31.216821 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-03 04:32:31.216872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-03 04:32:31.216887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-03 04:32:31.216917 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-03 04:32:31.216930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-03 04:32:31.216943 | orchestrator | 2026-02-03 04:32:31.216957 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-02-03 04:32:31.216970 | orchestrator | Tuesday 03 February 2026 04:31:58 +0000 (0:00:01.790) 0:01:05.938 ****** 2026-02-03 04:32:31.216982 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:32:31.216994 | orchestrator | 
skipping: [testbed-node-1]
2026-02-03 04:32:31.217005 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:32:31.217016 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:32:31.217027 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:32:31.217038 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:32:31.217049 | orchestrator |
2026-02-03 04:32:31.217068 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] *********************
2026-02-03 04:32:31.217087 | orchestrator | Tuesday 03 February 2026 04:31:58 +0000 (0:00:00.643) 0:01:06.582 ******
2026-02-03 04:32:31.217106 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:32:31.217125 | orchestrator |
2026-02-03 04:32:31.217144 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-02-03 04:32:31.217161 | orchestrator | Tuesday 03 February 2026 04:32:03 +0000 (0:00:04.770) 0:01:11.353 ******
2026-02-03 04:32:31.217172 | orchestrator |
2026-02-03 04:32:31.217183 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-02-03 04:32:31.217195 | orchestrator | Tuesday 03 February 2026 04:32:03 +0000 (0:00:00.075) 0:01:11.428 ******
2026-02-03 04:32:31.217216 | orchestrator |
2026-02-03 04:32:31.217231 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-02-03 04:32:31.217250 | orchestrator | Tuesday 03 February 2026 04:32:03 +0000 (0:00:00.077) 0:01:11.506 ******
2026-02-03 04:32:31.217279 | orchestrator |
2026-02-03 04:32:31.217300 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-02-03 04:32:31.217318 | orchestrator | Tuesday 03 February 2026 04:32:04 +0000 (0:00:00.274) 0:01:11.780 ******
2026-02-03 04:32:31.217336 | orchestrator |
2026-02-03 04:32:31.217354 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-02-03 04:32:31.217373 | orchestrator | Tuesday 03 February 2026 04:32:04 +0000 (0:00:00.072) 0:01:11.853 ******
2026-02-03 04:32:31.217392 | orchestrator |
2026-02-03 04:32:31.217411 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-02-03 04:32:31.217432 | orchestrator | Tuesday 03 February 2026 04:32:04 +0000 (0:00:00.068) 0:01:11.922 ******
2026-02-03 04:32:31.217451 | orchestrator |
2026-02-03 04:32:31.217501 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] *******
2026-02-03 04:32:31.217519 | orchestrator | Tuesday 03 February 2026 04:32:04 +0000 (0:00:00.072) 0:01:11.994 ******
2026-02-03 04:32:31.217532 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:32:31.217546 | orchestrator | changed: [testbed-node-1]
2026-02-03 04:32:31.217558 | orchestrator | changed: [testbed-node-2]
2026-02-03 04:32:31.217570 | orchestrator |
2026-02-03 04:32:31.217589 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************
2026-02-03 04:32:31.217615 | orchestrator | Tuesday 03 February 2026 04:32:10 +0000 (0:00:05.778) 0:01:17.773 ******
2026-02-03 04:32:31.217632 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:32:31.217649 | orchestrator | changed: [testbed-node-1]
2026-02-03 04:32:31.217667 | orchestrator | changed: [testbed-node-2]
2026-02-03 04:32:31.217686 | orchestrator |
2026-02-03 04:32:31.217706 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************
2026-02-03 04:32:31.217725 | orchestrator | Tuesday 03 February 2026 04:32:19 +0000 (0:00:09.928) 0:01:27.701 ******
2026-02-03 04:32:31.217741 | orchestrator | changed: [testbed-node-4]
2026-02-03 04:32:31.217752 | orchestrator | changed: [testbed-node-5]
2026-02-03 04:32:31.217763 | orchestrator | changed: [testbed-node-3]
2026-02-03 04:32:31.217774 | orchestrator |
2026-02-03 04:32:31.217785 | orchestrator | PLAY RECAP *********************************************************************
2026-02-03 04:32:31.217797 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-02-03 04:32:31.217810 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-03 04:32:31.217834 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-03 04:32:31.770727 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-02-03 04:32:31.770828 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-02-03 04:32:31.770844 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-02-03 04:32:31.770857 | orchestrator |
2026-02-03 04:32:31.770870 | orchestrator |
2026-02-03 04:32:31.770882 | orchestrator | TASKS RECAP ********************************************************************
2026-02-03 04:32:31.770894 | orchestrator | Tuesday 03 February 2026 04:32:31 +0000 (0:00:11.241) 0:01:38.942 ******
2026-02-03 04:32:31.770906 | orchestrator | ===============================================================================
2026-02-03 04:32:31.770943 | orchestrator | ceilometer : Restart ceilometer-compute container ---------------------- 11.24s
2026-02-03 04:32:31.770955 | orchestrator | ceilometer : Restart ceilometer-central container ----------------------- 9.93s
2026-02-03 04:32:31.770966 | orchestrator | ceilometer : Restart ceilometer-notification container ------------------ 5.78s
2026-02-03 04:32:31.770977 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 5.02s
2026-02-03 04:32:31.770988 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 4.77s
2026-02-03 04:32:31.771002 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 3.98s
2026-02-03 04:32:31.771022 | orchestrator | service-ks-register : ceilometer | Granting user roles ------------------ 3.91s
2026-02-03 04:32:31.771041 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 3.79s
2026-02-03 04:32:31.771060 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 3.23s
2026-02-03 04:32:31.771080 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.65s
2026-02-03 04:32:31.771101 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.61s
2026-02-03 04:32:31.771114 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.29s
2026-02-03 04:32:31.771125 | orchestrator | ceilometer : Check custom gnocchi_resources.yaml exists ----------------- 1.90s
2026-02-03 04:32:31.771136 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 1.79s
2026-02-03 04:32:31.771147 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.74s
2026-02-03 04:32:31.771158 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.74s
2026-02-03 04:32:31.771169 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 1.72s
2026-02-03 04:32:31.771180 | orchestrator | ceilometer : Check custom event_definitions.yaml exists ----------------- 1.64s
2026-02-03 04:32:31.771191 | orchestrator | ceilometer : Check if custom polling.yaml exists ------------------------ 1.62s
2026-02-03 04:32:31.771202 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 1.60s
2026-02-03 04:32:34.687642 | orchestrator | 2026-02-03 04:32:34 | INFO  | Task 732df70d-fecc-4d74-bbab-9145bfdb05fa (aodh) was prepared for execution.
2026-02-03 04:32:34.687753 | orchestrator | 2026-02-03 04:32:34 | INFO  | It takes a moment until task 732df70d-fecc-4d74-bbab-9145bfdb05fa (aodh) has been started and output is visible here.
2026-02-03 04:33:06.970707 | orchestrator |
2026-02-03 04:33:06.970815 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-03 04:33:06.970830 | orchestrator |
2026-02-03 04:33:06.970841 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-03 04:33:06.970852 | orchestrator | Tuesday 03 February 2026 04:32:39 +0000 (0:00:00.286) 0:00:00.286 ******
2026-02-03 04:33:06.970862 | orchestrator | ok: [testbed-node-0]
2026-02-03 04:33:06.970872 | orchestrator | ok: [testbed-node-1]
2026-02-03 04:33:06.970882 | orchestrator | ok: [testbed-node-2]
2026-02-03 04:33:06.970892 | orchestrator |
2026-02-03 04:33:06.970902 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-03 04:33:06.970929 | orchestrator | Tuesday 03 February 2026 04:32:39 +0000 (0:00:00.352) 0:00:00.639 ******
2026-02-03 04:33:06.970940 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True)
2026-02-03 04:33:06.970950 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True)
2026-02-03 04:33:06.970959 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True)
2026-02-03 04:33:06.970969 | orchestrator |
2026-02-03 04:33:06.970979 | orchestrator | PLAY [Apply role aodh] *********************************************************
2026-02-03 04:33:06.970988 | orchestrator |
2026-02-03 04:33:06.971001 | orchestrator | TASK [aodh : include_tasks] ****************************************************
2026-02-03 04:33:06.971018 | orchestrator | Tuesday 03 February 2026 04:32:39 +0000 (0:00:00.470) 0:00:01.110 ******
2026-02-03 04:33:06.971034 | orchestrator | included: /ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 04:33:06.971076 | orchestrator |
2026-02-03 04:33:06.971092 | orchestrator | TASK [service-ks-register : aodh | Creating services] **************************
2026-02-03 04:33:06.971107 | orchestrator | Tuesday 03 February 2026 04:32:40 +0000 (0:00:00.588) 0:00:01.698 ******
2026-02-03 04:33:06.971123 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming))
2026-02-03 04:33:06.971138 | orchestrator |
2026-02-03 04:33:06.971154 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] *************************
2026-02-03 04:33:06.971169 | orchestrator | Tuesday 03 February 2026 04:32:44 +0000 (0:00:03.537) 0:00:05.235 ******
2026-02-03 04:33:06.971184 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal)
2026-02-03 04:33:06.971201 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public)
2026-02-03 04:33:06.971217 | orchestrator |
2026-02-03 04:33:06.971234 | orchestrator | TASK [service-ks-register : aodh | Creating projects] **************************
2026-02-03 04:33:06.971250 | orchestrator | Tuesday 03 February 2026 04:32:50 +0000 (0:00:06.437) 0:00:11.673 ******
2026-02-03 04:33:06.971268 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-03 04:33:06.971286 | orchestrator |
2026-02-03 04:33:06.971301 | orchestrator | TASK [service-ks-register : aodh | Creating users] *****************************
2026-02-03 04:33:06.971316 | orchestrator | Tuesday 03 February 2026 04:32:53 +0000 (0:00:03.464) 0:00:15.138 ******
2026-02-03 04:33:06.971332 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-03 04:33:06.971349 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service)
2026-02-03 04:33:06.971365 | orchestrator |
2026-02-03 04:33:06.971382 | orchestrator | TASK [service-ks-register : aodh | Creating roles] ***************************** 2026-02-03
04:33:06.971397 | orchestrator | Tuesday 03 February 2026 04:32:57 +0000 (0:00:03.893) 0:00:19.031 ****** 2026-02-03 04:33:06.971414 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-03 04:33:06.971430 | orchestrator | 2026-02-03 04:33:06.971447 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************ 2026-02-03 04:33:06.971464 | orchestrator | Tuesday 03 February 2026 04:33:01 +0000 (0:00:03.196) 0:00:22.228 ****** 2026-02-03 04:33:06.971479 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin) 2026-02-03 04:33:06.971569 | orchestrator | 2026-02-03 04:33:06.971590 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-02-03 04:33:06.971607 | orchestrator | Tuesday 03 February 2026 04:33:04 +0000 (0:00:03.742) 0:00:25.971 ****** 2026-02-03 04:33:06.971627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-03 04:33:06.971675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-03 04:33:06.971719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-03 04:33:06.971737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-03 04:33:06.971757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-03 04:33:06.971774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-03 04:33:06.971791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-03 04:33:06.971819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-03 04:33:08.400425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-03 04:33:08.400514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-03 
04:33:08.400523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-03 04:33:08.400528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-03 04:33:08.400534 | orchestrator | 2026-02-03 04:33:08.400540 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-02-03 04:33:08.400547 | orchestrator | Tuesday 03 February 2026 04:33:06 +0000 (0:00:02.146) 0:00:28.118 ****** 2026-02-03 04:33:08.400552 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:33:08.400558 | orchestrator | 2026-02-03 04:33:08.400562 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-02-03 04:33:08.400567 | orchestrator | Tuesday 03 February 2026 04:33:07 +0000 (0:00:00.149) 0:00:28.267 ****** 2026-02-03 04:33:08.400572 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:33:08.400576 | orchestrator | skipping: 
[testbed-node-1] 2026-02-03 04:33:08.400581 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:33:08.400586 | orchestrator | 2026-02-03 04:33:08.400590 | orchestrator | TASK [aodh : Copying over existing policy file] ******************************** 2026-02-03 04:33:08.400595 | orchestrator | Tuesday 03 February 2026 04:33:07 +0000 (0:00:00.560) 0:00:28.827 ****** 2026-02-03 04:33:08.400600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-03 04:33:08.400633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-03 04:33:08.400642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-03 04:33:08.400647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-03 04:33:08.400652 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:33:08.400657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-03 04:33:08.400662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-03 04:33:08.400667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-03 04:33:08.400681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 
'timeout': '30'}}})  2026-02-03 04:33:13.421692 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:33:13.421819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-03 04:33:13.421840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-03 04:33:13.421855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-03 04:33:13.421866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-03 04:33:13.421878 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:33:13.421890 | orchestrator | 2026-02-03 04:33:13.421920 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-02-03 04:33:13.421969 | orchestrator | Tuesday 03 February 2026 04:33:08 +0000 (0:00:00.723) 0:00:29.550 ****** 2026-02-03 04:33:13.421983 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 04:33:13.421995 | orchestrator | 2026-02-03 04:33:13.422007 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-02-03 04:33:13.422078 | orchestrator | Tuesday 03 February 2026 04:33:09 +0000 (0:00:00.772) 0:00:30.323 ****** 2026-02-03 04:33:13.422091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-03 04:33:13.422136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-03 04:33:13.422159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-03 04:33:13.422208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-03 04:33:13.422223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-03 04:33:13.422247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-03 04:33:13.422260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-03 04:33:13.422290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-03 04:33:14.062156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-03 04:33:14.062255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-03 04:33:14.062272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-03 04:33:14.062309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-03 04:33:14.062322 | orchestrator | 2026-02-03 04:33:14.062337 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-02-03 04:33:14.062350 | orchestrator | Tuesday 03 February 2026 04:33:13 +0000 (0:00:04.247) 0:00:34.570 ****** 2026-02-03 04:33:14.062364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-03 04:33:14.062392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-03 04:33:14.062424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 
'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-03 04:33:14.062437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-03 04:33:14.062449 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:33:14.062462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-03 04:33:14.062620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-03 04:33:14.062644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-03 04:33:14.062658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  
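
The service entries dumped in the loop output above all share one dict shape: a container name, an image, volumes, and a `healthcheck` whose `test` is a `CMD-SHELL` command (`healthcheck_curl` against the API port, or `healthcheck_port` checking that the daemon holds a connection to the database or message-queue port). As a minimal sketch of that mapping, with the dict values copied from the log and the helper function below being hypothetical (not kolla-ansible's actual code):

```python
# Sketch only: shows how a kolla service-definition dict (as dumped in the
# Ansible loop output) maps to the shell command its container healthcheck
# runs. Dict contents are taken from the log; healthcheck_command() is a
# hypothetical helper, not part of kolla-ansible.

aodh_services = {
    "aodh-api": {
        "container_name": "aodh_api",
        "enabled": True,
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            # API containers are probed over HTTP on the service port
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8042"],
            "timeout": "30",
        },
    },
    "aodh-evaluator": {
        "container_name": "aodh_evaluator",
        "enabled": True,
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            # non-API daemons are probed by checking for an open connection
            # to a backing service port (3306 = MariaDB, 5672 = RabbitMQ)
            "test": ["CMD-SHELL", "healthcheck_port aodh-evaluator 3306"],
            "timeout": "30",
        },
    },
}


def healthcheck_command(service: dict) -> str:
    """Return the shell command the container healthcheck would execute."""
    kind, cmd = service["healthcheck"]["test"]
    assert kind == "CMD-SHELL"
    return cmd


for name, svc in aodh_services.items():
    if svc["enabled"]:
        print(f"{svc['container_name']}: {healthcheck_command(svc)}")
```

This is why the evaluator/listener/notifier healthchecks in the log name ports 3306 and 5672 rather than an aodh port: those daemons expose no API, so liveness is inferred from their database and message-queue connections.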
2026-02-03 04:33:14.062671 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:33:14.062704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-03 04:33:15.174993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-03 04:33:15.175116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-03 04:33:15.175175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-03 04:33:15.175197 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:33:15.175219 | orchestrator | 2026-02-03 04:33:15.175241 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-02-03 04:33:15.175261 | orchestrator | Tuesday 03 February 2026 04:33:14 +0000 (0:00:00.641) 0:00:35.212 ****** 2026-02-03 04:33:15.175281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-03 04:33:15.175319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-03 04:33:15.175341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-03 04:33:15.175387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 
'timeout': '30'}}})  2026-02-03 04:33:15.175418 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:33:15.175438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-03 04:33:15.175457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-03 04:33:15.175477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-03 04:33:15.175496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-03 04:33:15.175579 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:33:15.175613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-03 04:33:19.590273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-03 04:33:19.590410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-03 04:33:19.590428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-03 04:33:19.590441 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:33:19.590454 | orchestrator | 2026-02-03 04:33:19.590467 | orchestrator | TASK [aodh : Copying over config.json files for services] ********************** 
2026-02-03 04:33:19.590480 | orchestrator | Tuesday 03 February 2026 04:33:15 +0000 (0:00:01.112) 0:00:36.324 ****** 2026-02-03 04:33:19.590492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-03 04:33:19.590581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-03 04:33:19.590618 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-03 04:33:19.590640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-03 04:33:19.590652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-03 04:33:19.590663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-03 04:33:19.590675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-03 04:33:19.590692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-03 04:33:19.590703 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-03 04:33:19.590729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-03 04:33:28.278718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-03 04:33:28.278810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-03 04:33:28.278821 | orchestrator | 2026-02-03 04:33:28.278832 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-02-03 04:33:28.278841 | orchestrator | Tuesday 03 February 2026 04:33:19 +0000 (0:00:04.415) 0:00:40.740 ****** 2026-02-03 04:33:28.278850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-03 04:33:28.278872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-03 04:33:28.278897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-03 04:33:28.278918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-03 04:33:28.278927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-03 04:33:28.278935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-03 04:33:28.278942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-03 04:33:28.278954 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-03 04:33:28.278962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-03 04:33:28.278975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-03 04:33:28.278989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-03 04:33:33.256399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-03 04:33:33.256507 | orchestrator |
2026-02-03 04:33:33.256524 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************
2026-02-03 04:33:33.256592 | orchestrator | Tuesday 03 February 2026 04:33:28 +0000 (0:00:08.689) 0:00:49.430 ******
2026-02-03 04:33:33.256606 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:33:33.256618 | orchestrator | changed: [testbed-node-1]
2026-02-03 04:33:33.256629 | orchestrator | changed: [testbed-node-2]
2026-02-03 04:33:33.256640 | orchestrator |
2026-02-03 04:33:33.256652 | orchestrator | TASK [aodh : Check aodh containers] ********************************************
2026-02-03 04:33:33.256663 | orchestrator | Tuesday 03 February 2026 04:33:30 +0000 (0:00:01.775) 0:00:51.205 ******
2026-02-03 04:33:33.256675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group':
'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-03 04:33:33.256707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-03 04:33:33.256744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-03 04:33:33.256774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-03 04:33:33.256787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-03 04:33:33.256799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 
'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-03 04:33:33.256810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-03 04:33:33.256847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-03 04:33:33.256867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-03 04:33:33.256886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-03 04:33:33.256915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-03 04:34:28.693982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-03 04:34:28.694160 | orchestrator |
2026-02-03 04:34:28.694180 | orchestrator | TASK [aodh : include_tasks] ****************************************************
2026-02-03 04:34:28.694194 | orchestrator | Tuesday 03 February 2026 04:33:33 +0000 (0:00:03.208) 0:00:54.413 ******
2026-02-03 04:34:28.694205 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:34:28.694218 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:34:28.694229 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:34:28.694240 | orchestrator |
2026-02-03 04:34:28.694252 | orchestrator | TASK [aodh : Creating aodh database] *******************************************
2026-02-03 04:34:28.694270 | orchestrator | Tuesday 03 February 2026 04:33:33 +0000 (0:00:00.321) 0:00:54.735 ******
2026-02-03 04:34:28.694289 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:34:28.694315 | orchestrator |
2026-02-03 04:34:28.694342 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] **************
2026-02-03 04:34:28.694359 | orchestrator | Tuesday 03 February 2026 04:33:35 +0000 (0:00:02.202) 0:00:56.938 ******
2026-02-03 04:34:28.694406 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:34:28.694425 | orchestrator |
2026-02-03 04:34:28.694444 | orchestrator | TASK [aodh : Running aodh bootstrap container] *********************************
2026-02-03 04:34:28.694463 | orchestrator | Tuesday 03 February 2026 04:33:38 +0000 (0:00:02.324) 0:00:59.263 ******
2026-02-03 04:34:28.694483 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:34:28.694503 | orchestrator |
2026-02-03 04:34:28.694523 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-02-03 04:34:28.694536 | orchestrator | Tuesday 03 February 2026 04:33:51 +0000 (0:00:13.202) 0:01:12.465 ******
2026-02-03 04:34:28.694550 | orchestrator |
2026-02-03 04:34:28.694563 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-02-03 04:34:28.694576 | orchestrator | Tuesday 03 February 2026 04:33:51 +0000 (0:00:00.076) 0:01:12.541 ******
2026-02-03 04:34:28.694588 | orchestrator |
2026-02-03 04:34:28.694601 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-02-03 04:34:28.694615 | orchestrator | Tuesday 03 February 2026 04:33:51 +0000 (0:00:00.070) 0:01:12.612 ******
2026-02-03 04:34:28.694657 | orchestrator |
2026-02-03 04:34:28.694673 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] ****************************
2026-02-03 04:34:28.694703 | orchestrator | Tuesday 03 February 2026 04:33:51 +0000 (0:00:00.277) 0:01:12.890 ******
2026-02-03 04:34:28.694716 | orchestrator | changed: [testbed-node-2]
2026-02-03 04:34:28.694728 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:34:28.694741 | orchestrator | changed: [testbed-node-1]
2026-02-03 04:34:28.694754 | orchestrator |
2026-02-03 04:34:28.694765 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] **********************
2026-02-03 04:34:28.694776 | orchestrator | Tuesday 03 February 2026 04:34:02 +0000 (0:00:10.607) 0:01:23.498 ******
2026-02-03 04:34:28.694787 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:34:28.694798 | orchestrator | changed: [testbed-node-1]
2026-02-03 04:34:28.694809 | orchestrator | changed: [testbed-node-2]
2026-02-03 04:34:28.694820 | orchestrator |
2026-02-03 04:34:28.694831 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] ***********************
2026-02-03 04:34:28.694842 | orchestrator | Tuesday 03 February 2026 04:34:07 +0000 (0:00:05.373) 0:01:28.871 ******
2026-02-03 04:34:28.694852 | orchestrator | changed: [testbed-node-2]
2026-02-03 04:34:28.694863 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:34:28.694874 | orchestrator | changed: [testbed-node-1]
2026-02-03 04:34:28.694885 | orchestrator |
2026-02-03 04:34:28.694896 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] ***********************
2026-02-03 04:34:28.694907 | orchestrator | Tuesday 03 February 2026 04:34:17 +0000 (0:00:10.131) 0:01:39.003 ******
2026-02-03 04:34:28.694917 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:34:28.694928 | orchestrator | changed: [testbed-node-2]
2026-02-03 04:34:28.694939 | orchestrator | changed: [testbed-node-1]
2026-02-03 04:34:28.694950 | orchestrator |
2026-02-03 04:34:28.694960 | orchestrator | PLAY RECAP *********************************************************************
2026-02-03 04:34:28.694973 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-03 04:34:28.694985 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-03 04:34:28.694996 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-03 04:34:28.695007 | orchestrator |
2026-02-03 04:34:28.695018 | orchestrator |
2026-02-03 04:34:28.695029 | orchestrator | TASKS RECAP ********************************************************************
2026-02-03 04:34:28.695040 | orchestrator | Tuesday 03 February 2026 04:34:28 +0000 (0:00:10.452) 0:01:49.456 ******
2026-02-03 04:34:28.695051 | orchestrator | ===============================================================================
2026-02-03 04:34:28.695071 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 13.20s
2026-02-03 04:34:28.695082 | orchestrator | aodh : Restart aodh-api container -------------------------------------- 10.61s
2026-02-03 04:34:28.695113 | orchestrator | aodh : Restart aodh-notifier container --------------------------------- 10.45s
2026-02-03 04:34:28.695125 | orchestrator | aodh : Restart aodh-listener container --------------------------------- 10.13s
2026-02-03 04:34:28.695136 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 8.69s
2026-02-03 04:34:28.695147 | orchestrator | service-ks-register : aodh | Creating endpoints ------------------------- 6.44s
2026-02-03 04:34:28.695158 | orchestrator | aodh : Restart aodh-evaluator container --------------------------------- 5.37s
2026-02-03 04:34:28.695169 | orchestrator | aodh : Copying over config.json files for services ---------------------- 4.42s
2026-02-03 04:34:28.695180 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.25s
2026-02-03 04:34:28.695191 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 3.89s
2026-02-03 04:34:28.695202 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 3.74s
2026-02-03 04:34:28.695213 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.54s
2026-02-03 04:34:28.695224 | orchestrator | service-ks-register : aodh | Creating projects -------------------------- 3.46s
2026-02-03 04:34:28.695235 | orchestrator | aodh : Check aodh containers -------------------------------------------- 3.21s
2026-02-03 04:34:28.695246 | orchestrator | service-ks-register : aodh | Creating roles ----------------------------- 3.20s
2026-02-03 04:34:28.695257 | orchestrator | aodh : Creating aodh database user and setting permissions -------------- 2.32s
2026-02-03 04:34:28.695268 | orchestrator | aodh : Creating aodh database ------------------------------------------- 2.20s
2026-02-03 04:34:28.695279 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 2.15s
2026-02-03 04:34:28.695289 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 1.78s
2026-02-03 04:34:28.695300 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 1.11s
2026-02-03 04:34:31.174076 | orchestrator | 2026-02-03 04:34:31 | INFO  | Task de0c9ab7-8f51-44ba-9552-950a493d3d00 (kolla-ceph-rgw) was prepared for execution.
2026-02-03 04:34:31.174171 | orchestrator | 2026-02-03 04:34:31 | INFO  | It takes a moment until task de0c9ab7-8f51-44ba-9552-950a493d3d00 (kolla-ceph-rgw) has been started and output is visible here.
2026-02-03 04:35:08.775037 | orchestrator |
2026-02-03 04:35:08.775152 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-03 04:35:08.775170 | orchestrator |
2026-02-03 04:35:08.775183 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-03 04:35:08.775195 | orchestrator | Tuesday 03 February 2026 04:34:35 +0000 (0:00:00.304) 0:00:00.304 ******
2026-02-03 04:35:08.775207 | orchestrator | ok: [testbed-manager]
2026-02-03 04:35:08.775219 | orchestrator | ok: [testbed-node-0]
2026-02-03 04:35:08.775230 | orchestrator | ok: [testbed-node-1]
2026-02-03 04:35:08.775259 | orchestrator | ok: [testbed-node-2]
2026-02-03 04:35:08.775270 | orchestrator | ok: [testbed-node-3]
2026-02-03 04:35:08.775281 | orchestrator | ok: [testbed-node-4]
2026-02-03 04:35:08.775292 | orchestrator | ok: [testbed-node-5]
2026-02-03 04:35:08.775303 | orchestrator |
2026-02-03 04:35:08.775314 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-03 04:35:08.775325 | orchestrator | Tuesday 03 February 2026 04:34:36 +0000 (0:00:00.886) 0:00:01.191 ******
2026-02-03 04:35:08.775337 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-02-03 04:35:08.775348 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-02-03 04:35:08.775359 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-02-03 04:35:08.775370 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-02-03 04:35:08.775381 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-02-03 04:35:08.775415 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-02-03 04:35:08.775426 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-02-03 04:35:08.775437 | orchestrator |
2026-02-03 04:35:08.775448 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-02-03 04:35:08.775459 | orchestrator |
2026-02-03 04:35:08.775470 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-02-03 04:35:08.775481 | orchestrator | Tuesday 03 February 2026 04:34:37 +0000 (0:00:00.774) 0:00:01.965 ******
2026-02-03 04:35:08.775492 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 04:35:08.775505 | orchestrator |
2026-02-03 04:35:08.775516 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-02-03 04:35:08.775528 | orchestrator | Tuesday 03 February 2026 04:34:38 +0000 (0:00:01.660) 0:00:03.626 ******
2026-02-03 04:35:08.775539 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-02-03 04:35:08.775550 | orchestrator |
2026-02-03 04:35:08.775561 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-02-03 04:35:08.775572 | orchestrator | Tuesday 03 February 2026 04:34:42 +0000 (0:00:03.927) 0:00:07.553 ******
2026-02-03 04:35:08.775586 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-02-03 04:35:08.775600 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-02-03 04:35:08.775613 | orchestrator |
2026-02-03 04:35:08.775627 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-02-03 04:35:08.775640 | orchestrator | Tuesday 03 February 2026 04:34:49 +0000 (0:00:06.927) 0:00:14.481 ******
2026-02-03 04:35:08.775653 | orchestrator | ok: [testbed-manager] => (item=service)
2026-02-03 04:35:08.775666 | orchestrator |
2026-02-03 04:35:08.775679 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-02-03 04:35:08.775714 | orchestrator | Tuesday 03 February 2026 04:34:53 +0000 (0:00:03.248) 0:00:17.730 ******
2026-02-03 04:35:08.775728 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-03 04:35:08.775742 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-02-03 04:35:08.775755 | orchestrator |
2026-02-03 04:35:08.775766 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-02-03 04:35:08.775777 | orchestrator | Tuesday 03 February 2026 04:34:57 +0000 (0:00:04.055) 0:00:21.785 ******
2026-02-03 04:35:08.775788 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-02-03 04:35:08.775799 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-02-03 04:35:08.775810 | orchestrator |
2026-02-03 04:35:08.775821 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-02-03 04:35:08.775831 | orchestrator | Tuesday 03 February 2026 04:35:03 +0000 (0:00:06.273) 0:00:28.059 ******
2026-02-03 04:35:08.775842 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-02-03 04:35:08.775853 | orchestrator |
2026-02-03 04:35:08.775864 | orchestrator | PLAY RECAP *********************************************************************
2026-02-03
04:35:08.775875 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-03 04:35:08.775887 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-03 04:35:08.775898 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-03 04:35:08.775909 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-03 04:35:08.775920 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-03 04:35:08.775957 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-03 04:35:08.775970 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-03 04:35:08.775981 | orchestrator | 2026-02-03 04:35:08.775992 | orchestrator | 2026-02-03 04:35:08.776003 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 04:35:08.776014 | orchestrator | Tuesday 03 February 2026 04:35:08 +0000 (0:00:04.886) 0:00:32.945 ****** 2026-02-03 04:35:08.776030 | orchestrator | =============================================================================== 2026-02-03 04:35:08.776042 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.93s 2026-02-03 04:35:08.776053 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.27s 2026-02-03 04:35:08.776064 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.89s 2026-02-03 04:35:08.776075 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.06s 2026-02-03 04:35:08.776085 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.93s 2026-02-03 
04:35:08.776096 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.25s 2026-02-03 04:35:08.776107 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.66s 2026-02-03 04:35:08.776118 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.89s 2026-02-03 04:35:08.776130 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.77s 2026-02-03 04:35:11.262831 | orchestrator | 2026-02-03 04:35:11 | INFO  | Task c09d5260-0847-4936-98b9-647ce504e41c (gnocchi) was prepared for execution. 2026-02-03 04:35:11.262943 | orchestrator | 2026-02-03 04:35:11 | INFO  | It takes a moment until task c09d5260-0847-4936-98b9-647ce504e41c (gnocchi) has been started and output is visible here. 2026-02-03 04:35:16.744584 | orchestrator | 2026-02-03 04:35:16.744703 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-03 04:35:16.744804 | orchestrator | 2026-02-03 04:35:16.744820 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-03 04:35:16.744834 | orchestrator | Tuesday 03 February 2026 04:35:15 +0000 (0:00:00.308) 0:00:00.308 ****** 2026-02-03 04:35:16.744849 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:35:16.744864 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:35:16.744878 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:35:16.744890 | orchestrator | 2026-02-03 04:35:16.744903 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-03 04:35:16.744916 | orchestrator | Tuesday 03 February 2026 04:35:16 +0000 (0:00:00.341) 0:00:00.650 ****** 2026-02-03 04:35:16.744929 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False) 2026-02-03 04:35:16.744944 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True 
2026-02-03 04:35:16.744958 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False) 2026-02-03 04:35:16.744973 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False) 2026-02-03 04:35:16.744987 | orchestrator | 2026-02-03 04:35:16.745001 | orchestrator | PLAY [Apply role gnocchi] ****************************************************** 2026-02-03 04:35:16.745014 | orchestrator | skipping: no hosts matched 2026-02-03 04:35:16.745029 | orchestrator | 2026-02-03 04:35:16.745042 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 04:35:16.745055 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-03 04:35:16.745071 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-03 04:35:16.745116 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-03 04:35:16.745131 | orchestrator | 2026-02-03 04:35:16.745146 | orchestrator | 2026-02-03 04:35:16.745160 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 04:35:16.745174 | orchestrator | Tuesday 03 February 2026 04:35:16 +0000 (0:00:00.376) 0:00:01.027 ****** 2026-02-03 04:35:16.745189 | orchestrator | =============================================================================== 2026-02-03 04:35:16.745202 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.38s 2026-02-03 04:35:16.745216 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2026-02-03 04:35:19.306115 | orchestrator | 2026-02-03 04:35:19 | INFO  | Task 189a64d9-e168-484c-891d-d67c476bc257 (manila) was prepared for execution. 
2026-02-03 04:35:19.306214 | orchestrator | 2026-02-03 04:35:19 | INFO  | It takes a moment until task 189a64d9-e168-484c-891d-d67c476bc257 (manila) has been started and output is visible here. 2026-02-03 04:36:00.975442 | orchestrator | 2026-02-03 04:36:00.975536 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-03 04:36:00.975547 | orchestrator | 2026-02-03 04:36:00.975592 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-03 04:36:00.975601 | orchestrator | Tuesday 03 February 2026 04:35:23 +0000 (0:00:00.272) 0:00:00.272 ****** 2026-02-03 04:36:00.975608 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:36:00.975616 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:36:00.975623 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:36:00.975629 | orchestrator | 2026-02-03 04:36:00.975636 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-03 04:36:00.975643 | orchestrator | Tuesday 03 February 2026 04:35:24 +0000 (0:00:00.334) 0:00:00.606 ****** 2026-02-03 04:36:00.975650 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True) 2026-02-03 04:36:00.975657 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True) 2026-02-03 04:36:00.975664 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True) 2026-02-03 04:36:00.975670 | orchestrator | 2026-02-03 04:36:00.975677 | orchestrator | PLAY [Apply role manila] ******************************************************* 2026-02-03 04:36:00.975683 | orchestrator | 2026-02-03 04:36:00.975690 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-02-03 04:36:00.975710 | orchestrator | Tuesday 03 February 2026 04:35:24 +0000 (0:00:00.514) 0:00:01.120 ****** 2026-02-03 04:36:00.975716 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-02-03 04:36:00.975724 | orchestrator | 2026-02-03 04:36:00.975731 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-02-03 04:36:00.975737 | orchestrator | Tuesday 03 February 2026 04:35:25 +0000 (0:00:00.561) 0:00:01.682 ****** 2026-02-03 04:36:00.975744 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:36:00.975751 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:36:00.975758 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:36:00.975764 | orchestrator | 2026-02-03 04:36:00.975810 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************ 2026-02-03 04:36:00.975822 | orchestrator | Tuesday 03 February 2026 04:35:25 +0000 (0:00:00.531) 0:00:02.214 ****** 2026-02-03 04:36:00.975833 | orchestrator | changed: [testbed-node-0] => (item=manila (share)) 2026-02-03 04:36:00.975844 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2)) 2026-02-03 04:36:00.975853 | orchestrator | 2026-02-03 04:36:00.975860 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] *********************** 2026-02-03 04:36:00.975866 | orchestrator | Tuesday 03 February 2026 04:35:32 +0000 (0:00:06.415) 0:00:08.629 ****** 2026-02-03 04:36:00.975874 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal) 2026-02-03 04:36:00.975897 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public) 2026-02-03 04:36:00.975905 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal) 2026-02-03 04:36:00.975911 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public) 2026-02-03 04:36:00.975918 | orchestrator | 2026-02-03 04:36:00.975924 | orchestrator | TASK [service-ks-register : manila 
| Creating projects] ************************ 2026-02-03 04:36:00.975931 | orchestrator | Tuesday 03 February 2026 04:35:44 +0000 (0:00:12.644) 0:00:21.274 ****** 2026-02-03 04:36:00.975938 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-03 04:36:00.975944 | orchestrator | 2026-02-03 04:36:00.975951 | orchestrator | TASK [service-ks-register : manila | Creating users] *************************** 2026-02-03 04:36:00.975957 | orchestrator | Tuesday 03 February 2026 04:35:47 +0000 (0:00:03.200) 0:00:24.475 ****** 2026-02-03 04:36:00.975964 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-03 04:36:00.975970 | orchestrator | changed: [testbed-node-0] => (item=manila -> service) 2026-02-03 04:36:00.975977 | orchestrator | 2026-02-03 04:36:00.975983 | orchestrator | TASK [service-ks-register : manila | Creating roles] *************************** 2026-02-03 04:36:00.975990 | orchestrator | Tuesday 03 February 2026 04:35:51 +0000 (0:00:03.767) 0:00:28.243 ****** 2026-02-03 04:36:00.975996 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-03 04:36:00.976003 | orchestrator | 2026-02-03 04:36:00.976011 | orchestrator | TASK [service-ks-register : manila | Granting user roles] ********************** 2026-02-03 04:36:00.976019 | orchestrator | Tuesday 03 February 2026 04:35:54 +0000 (0:00:03.219) 0:00:31.462 ****** 2026-02-03 04:36:00.976027 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin) 2026-02-03 04:36:00.976034 | orchestrator | 2026-02-03 04:36:00.976042 | orchestrator | TASK [manila : Ensuring config directories exist] ****************************** 2026-02-03 04:36:00.976049 | orchestrator | Tuesday 03 February 2026 04:35:58 +0000 (0:00:03.761) 0:00:35.223 ****** 2026-02-03 04:36:00.976074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-03 04:36:00.976089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-03 04:36:00.976097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-03 04:36:00.976112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:00.976122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:00.976133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 
'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:00.976155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:12.039960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:12.040115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 
'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:12.040160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:12.040173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:12.040183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 
'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:12.040194 | orchestrator | 2026-02-03 04:36:12.040207 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-02-03 04:36:12.040218 | orchestrator | Tuesday 03 February 2026 04:36:01 +0000 (0:00:02.342) 0:00:37.565 ****** 2026-02-03 04:36:12.040230 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 04:36:12.040240 | orchestrator | 2026-02-03 04:36:12.040250 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] ************** 2026-02-03 04:36:12.040260 | orchestrator | Tuesday 03 February 2026 04:36:01 +0000 (0:00:00.600) 0:00:38.166 ****** 2026-02-03 04:36:12.040271 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:36:12.040281 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:36:12.040290 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:36:12.040300 | orchestrator | 2026-02-03 04:36:12.040310 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] ********************* 2026-02-03 04:36:12.040319 | orchestrator | Tuesday 03 February 2026 04:36:02 +0000 (0:00:01.042) 0:00:39.208 ****** 2026-02-03 04:36:12.040336 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-03 04:36:12.040377 | orchestrator | 
changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-03 04:36:12.040396 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-03 04:36:12.040426 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-03 04:36:12.040453 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-03 04:36:12.040470 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-03 04:36:12.040482 | orchestrator | 2026-02-03 04:36:12.040494 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] ********************************* 2026-02-03 04:36:12.040506 | orchestrator | Tuesday 03 February 2026 04:36:04 +0000 (0:00:01.902) 0:00:41.111 ****** 2026-02-03 04:36:12.040518 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-03 04:36:12.040529 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-03 04:36:12.040541 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': 
['CEPHFS']}) 2026-02-03 04:36:12.040553 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-03 04:36:12.040565 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-03 04:36:12.040577 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-03 04:36:12.040588 | orchestrator | 2026-02-03 04:36:12.040600 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] ***** 2026-02-03 04:36:12.040612 | orchestrator | Tuesday 03 February 2026 04:36:05 +0000 (0:00:01.235) 0:00:42.346 ****** 2026-02-03 04:36:12.040624 | orchestrator | ok: [testbed-node-0] => (item=manila-share) 2026-02-03 04:36:12.040635 | orchestrator | ok: [testbed-node-1] => (item=manila-share) 2026-02-03 04:36:12.040647 | orchestrator | ok: [testbed-node-2] => (item=manila-share) 2026-02-03 04:36:12.040659 | orchestrator | 2026-02-03 04:36:12.040671 | orchestrator | TASK [manila : Check if policies shall be overwritten] ************************* 2026-02-03 04:36:12.040683 | orchestrator | Tuesday 03 February 2026 04:36:06 +0000 (0:00:00.728) 0:00:43.075 ****** 2026-02-03 04:36:12.040694 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:36:12.040706 | orchestrator | 2026-02-03 04:36:12.040717 | orchestrator | TASK [manila : Set manila policy file] ***************************************** 2026-02-03 04:36:12.040729 | orchestrator | Tuesday 03 February 2026 04:36:06 +0000 (0:00:00.162) 0:00:43.237 ****** 2026-02-03 04:36:12.040740 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:36:12.040751 | orchestrator | skipping: 
[testbed-node-1] 2026-02-03 04:36:12.040764 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:36:12.040775 | orchestrator | 2026-02-03 04:36:12.040827 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-02-03 04:36:12.040847 | orchestrator | Tuesday 03 February 2026 04:36:07 +0000 (0:00:00.511) 0:00:43.748 ****** 2026-02-03 04:36:12.040864 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 04:36:12.040881 | orchestrator | 2026-02-03 04:36:12.040892 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] ********* 2026-02-03 04:36:12.040909 | orchestrator | Tuesday 03 February 2026 04:36:07 +0000 (0:00:00.664) 0:00:44.413 ****** 2026-02-03 04:36:12.040931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-03 04:36:12.944761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-03 04:36:12.944926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-03 04:36:12.944943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:12.944957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:12.944990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:12.945020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:12.945040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:12.945052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:12.945064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:12.945076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:12.945087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:12.945106 | orchestrator | 2026-02-03 04:36:12.945120 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-02-03 04:36:12.945133 | orchestrator | Tuesday 03 February 2026 04:36:12 +0000 (0:00:04.236) 0:00:48.650 ****** 2026-02-03 04:36:12.945153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-03 04:36:13.645346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 04:36:13.645451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-03 04:36:13.645468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-03 04:36:13.645481 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:36:13.645495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-03 04:36:13.645536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 04:36:13.645549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-03 04:36:13.645585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-03 04:36:13.645599 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:36:13.645611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-03 04:36:13.645623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 04:36:13.645646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-03 04:36:13.645657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-03 04:36:13.645669 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:36:13.645680 | orchestrator | 2026-02-03 04:36:13.645693 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ****** 2026-02-03 04:36:13.645706 | orchestrator | Tuesday 03 February 2026 04:36:13 +0000 (0:00:00.895) 0:00:49.545 ****** 2026-02-03 04:36:13.645730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-03 04:36:18.294425 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 04:36:18.294528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-03 04:36:18.294544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-03 04:36:18.294582 | 
orchestrator | skipping: [testbed-node-0] 2026-02-03 04:36:18.294597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-03 04:36:18.294611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 04:36:18.294640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-03 04:36:18.294673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-03 04:36:18.294686 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:36:18.294697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-03 04:36:18.294718 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 04:36:18.294729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-03 04:36:18.294741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-03 04:36:18.294752 | orchestrator | skipping: [testbed-node-2] 2026-02-03 
04:36:18.294764 | orchestrator | 2026-02-03 04:36:18.294778 | orchestrator | TASK [manila : Copying over config.json files for services] ******************** 2026-02-03 04:36:18.294791 | orchestrator | Tuesday 03 February 2026 04:36:13 +0000 (0:00:00.952) 0:00:50.497 ****** 2026-02-03 04:36:18.294905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-03 04:36:25.318256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-03 04:36:25.318453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-03 04:36:25.318474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:25.318489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:25.318501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:25.318545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:25.318559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:25.318578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:25.318591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:25.318603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:25.318614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:25.318626 | orchestrator | 2026-02-03 04:36:25.318639 | orchestrator | TASK [manila : Copying over manila.conf] *************************************** 2026-02-03 04:36:25.318652 | orchestrator | Tuesday 03 February 2026 04:36:18 +0000 (0:00:04.606) 0:00:55.104 ****** 2026-02-03 04:36:25.318676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-03 04:36:29.781641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-03 04:36:29.781723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-03 04:36:29.781733 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:29.781742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-03 04:36:29.781763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:29.781784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': 
{'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-03 04:36:29.781822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:29.781848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-03 04:36:29.781856 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:29.781863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:29.781870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-03 04:36:29.781877 | orchestrator | 2026-02-03 04:36:29.781886 | orchestrator | TASK [manila : 
Copying over manila-share.conf] ********************************* 2026-02-03 04:36:29.781898 | orchestrator | Tuesday 03 February 2026 04:36:25 +0000 (0:00:06.810) 0:01:01.914 ****** 2026-02-03 04:36:29.781906 | orchestrator | changed: [testbed-node-1] => (item=manila-share) 2026-02-03 04:36:29.781913 | orchestrator | changed: [testbed-node-0] => (item=manila-share) 2026-02-03 04:36:29.781920 | orchestrator | changed: [testbed-node-2] => (item=manila-share) 2026-02-03 04:36:29.781933 | orchestrator | 2026-02-03 04:36:29.781939 | orchestrator | TASK [manila : Copying over existing policy file] ****************************** 2026-02-03 04:36:29.781946 | orchestrator | Tuesday 03 February 2026 04:36:29 +0000 (0:00:03.814) 0:01:05.728 ****** 2026-02-03 04:36:29.781959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-03 04:36:33.257175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 04:36:33.257313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-03 04:36:33.257332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-03 04:36:33.257350 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:36:33.257373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-03 04:36:33.257441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 04:36:33.257461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-03 04:36:33.257505 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-03 04:36:33.257528 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:36:33.257547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-03 04:36:33.257568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 04:36:33.257587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-03 04:36:33.257707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-03 04:36:33.257734 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:36:33.257756 | orchestrator | 2026-02-03 04:36:33.257779 | orchestrator | TASK [manila : Check manila containers] **************************************** 2026-02-03 04:36:33.257801 | orchestrator | Tuesday 03 February 2026 04:36:29 +0000 (0:00:00.646) 0:01:06.375 ****** 2026-02-03 04:36:33.257862 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-03 04:37:14.166665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-03 04:37:14.166779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-03 04:37:14.166797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:37:14.166849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:37:14.166863 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-03 04:37:14.166891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-03 04:37:14.166905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-03 04:37:14.166916 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-03 04:37:14.167044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-03 04:37:14.167073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-03 04:37:14.167085 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-03 04:37:14.167097 | orchestrator | 2026-02-03 04:37:14.167110 | orchestrator | TASK [manila : Creating Manila database] *************************************** 2026-02-03 04:37:14.167123 | orchestrator | Tuesday 03 February 2026 04:36:33 +0000 (0:00:03.496) 0:01:09.872 ****** 2026-02-03 04:37:14.167135 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:37:14.167147 | orchestrator | 2026-02-03 04:37:14.167159 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] ********** 2026-02-03 04:37:14.167170 | orchestrator | Tuesday 03 February 2026 04:36:35 +0000 (0:00:02.176) 0:01:12.048 ****** 2026-02-03 04:37:14.167181 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:37:14.167192 | orchestrator | 2026-02-03 04:37:14.167205 | orchestrator | TASK [manila : Running Manila bootstrap container] ***************************** 2026-02-03 04:37:14.167218 | orchestrator | Tuesday 03 February 2026 04:36:37 +0000 (0:00:02.289) 0:01:14.338 ****** 2026-02-03 04:37:14.167230 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:37:14.167242 | orchestrator | 2026-02-03 04:37:14.167254 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-02-03 04:37:14.167266 | orchestrator | Tuesday 03 February 2026 04:37:13 +0000 (0:00:36.083) 0:01:50.421 ****** 2026-02-03 04:37:14.167279 | 
orchestrator | 2026-02-03 04:37:14.167301 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-02-03 04:38:03.494592 | orchestrator | Tuesday 03 February 2026 04:37:13 +0000 (0:00:00.087) 0:01:50.509 ****** 2026-02-03 04:38:03.494721 | orchestrator | 2026-02-03 04:38:03.494738 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-02-03 04:38:03.494750 | orchestrator | Tuesday 03 February 2026 04:37:14 +0000 (0:00:00.075) 0:01:50.584 ****** 2026-02-03 04:38:03.494764 | orchestrator | 2026-02-03 04:38:03.494780 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************ 2026-02-03 04:38:03.494797 | orchestrator | Tuesday 03 February 2026 04:37:14 +0000 (0:00:00.072) 0:01:50.657 ****** 2026-02-03 04:38:03.494813 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:38:03.494833 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:38:03.494849 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:38:03.494866 | orchestrator | 2026-02-03 04:38:03.494877 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] *********************** 2026-02-03 04:38:03.494887 | orchestrator | Tuesday 03 February 2026 04:37:24 +0000 (0:00:10.263) 0:02:00.920 ****** 2026-02-03 04:38:03.494896 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:38:03.494906 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:38:03.494943 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:38:03.494953 | orchestrator | 2026-02-03 04:38:03.494963 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ****************** 2026-02-03 04:38:03.494973 | orchestrator | Tuesday 03 February 2026 04:37:35 +0000 (0:00:10.911) 0:02:11.832 ****** 2026-02-03 04:38:03.494983 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:38:03.495039 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:38:03.495049 | 
orchestrator | changed: [testbed-node-2] 2026-02-03 04:38:03.495059 | orchestrator | 2026-02-03 04:38:03.495068 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] ********************** 2026-02-03 04:38:03.495079 | orchestrator | Tuesday 03 February 2026 04:37:45 +0000 (0:00:10.579) 0:02:22.412 ****** 2026-02-03 04:38:03.495088 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:38:03.495097 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:38:03.495107 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:38:03.495117 | orchestrator | 2026-02-03 04:38:03.495127 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 04:38:03.495138 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-03 04:38:03.495151 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-03 04:38:03.495163 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-03 04:38:03.495174 | orchestrator | 2026-02-03 04:38:03.495186 | orchestrator | 2026-02-03 04:38:03.495197 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 04:38:03.495209 | orchestrator | Tuesday 03 February 2026 04:38:02 +0000 (0:00:17.092) 0:02:39.504 ****** 2026-02-03 04:38:03.495221 | orchestrator | =============================================================================== 2026-02-03 04:38:03.495233 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 36.08s 2026-02-03 04:38:03.495245 | orchestrator | manila : Restart manila-share container -------------------------------- 17.09s 2026-02-03 04:38:03.495256 | orchestrator | service-ks-register : manila | Creating endpoints ---------------------- 12.64s 2026-02-03 04:38:03.495267 | orchestrator | manila : Restart 
manila-data container --------------------------------- 10.91s
2026-02-03 04:38:03.495293 | orchestrator | manila : Restart manila-scheduler container ---------------------------- 10.58s
2026-02-03 04:38:03.495305 | orchestrator | manila : Restart manila-api container ---------------------------------- 10.26s
2026-02-03 04:38:03.495316 | orchestrator | manila : Copying over manila.conf --------------------------------------- 6.81s
2026-02-03 04:38:03.495327 | orchestrator | service-ks-register : manila | Creating services ------------------------ 6.42s
2026-02-03 04:38:03.495339 | orchestrator | manila : Copying over config.json files for services -------------------- 4.61s
2026-02-03 04:38:03.495353 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 4.24s
2026-02-03 04:38:03.495366 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 3.81s
2026-02-03 04:38:03.495378 | orchestrator | service-ks-register : manila | Creating users --------------------------- 3.77s
2026-02-03 04:38:03.495392 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 3.76s
2026-02-03 04:38:03.495405 | orchestrator | manila : Check manila containers ---------------------------------------- 3.50s
2026-02-03 04:38:03.495418 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 3.22s
2026-02-03 04:38:03.495431 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.20s
2026-02-03 04:38:03.495444 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.34s
2026-02-03 04:38:03.495457 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.29s
2026-02-03 04:38:03.495469 | orchestrator | manila : Creating Manila database --------------------------------------- 2.18s
2026-02-03 04:38:03.495491 | orchestrator | manila : Copy over multiple ceph configs for Manila --------------------- 1.90s
2026-02-03 04:38:03.861810 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh
2026-02-03 04:38:16.156720 | orchestrator | 2026-02-03 04:38:16 | INFO  | Task 91e6b07c-f1b2-4f0b-b3e0-0d7705cf6ee0 (netdata) was prepared for execution.
2026-02-03 04:38:16.156825 | orchestrator | 2026-02-03 04:38:16 | INFO  | It takes a moment until task 91e6b07c-f1b2-4f0b-b3e0-0d7705cf6ee0 (netdata) has been started and output is visible here.
2026-02-03 04:39:52.473518 | orchestrator |
2026-02-03 04:39:52.473624 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-03 04:39:52.473639 | orchestrator |
2026-02-03 04:39:52.473652 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-03 04:39:52.473663 | orchestrator | Tuesday 03 February 2026 04:38:20 +0000 (0:00:00.247) 0:00:00.247 ******
2026-02-03 04:39:52.473675 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-02-03 04:39:52.473687 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-02-03 04:39:52.473698 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-02-03 04:39:52.473708 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-02-03 04:39:52.473719 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-02-03 04:39:52.473730 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-02-03 04:39:52.473741 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-02-03 04:39:52.473752 | orchestrator |
2026-02-03 04:39:52.473762 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-02-03 04:39:52.473773 | orchestrator |
2026-02-03 04:39:52.473784 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-02-03 04:39:52.473795 | orchestrator | Tuesday 03 February 2026 04:38:21 +0000 (0:00:00.934) 0:00:01.182 ******
2026-02-03 04:39:52.473808 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 04:39:52.473822 | orchestrator |
2026-02-03 04:39:52.473833 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-02-03 04:39:52.473844 | orchestrator | Tuesday 03 February 2026 04:38:23 +0000 (0:00:01.379) 0:00:02.561 ******
2026-02-03 04:39:52.473855 | orchestrator | ok: [testbed-node-0]
2026-02-03 04:39:52.473868 | orchestrator | ok: [testbed-manager]
2026-02-03 04:39:52.473879 | orchestrator | ok: [testbed-node-1]
2026-02-03 04:39:52.473890 | orchestrator | ok: [testbed-node-2]
2026-02-03 04:39:52.473901 | orchestrator | ok: [testbed-node-3]
2026-02-03 04:39:52.473912 | orchestrator | ok: [testbed-node-4]
2026-02-03 04:39:52.473922 | orchestrator | ok: [testbed-node-5]
2026-02-03 04:39:52.473933 | orchestrator |
2026-02-03 04:39:52.473944 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-02-03 04:39:52.473955 | orchestrator | Tuesday 03 February 2026 04:38:25 +0000 (0:00:01.939) 0:00:04.501 ******
2026-02-03 04:39:52.473966 | orchestrator | ok: [testbed-node-1]
2026-02-03 04:39:52.473977 | orchestrator | ok: [testbed-node-0]
2026-02-03 04:39:52.473988 | orchestrator | ok: [testbed-node-2]
2026-02-03 04:39:52.473999 | orchestrator | ok: [testbed-node-3]
2026-02-03 04:39:52.474010 | orchestrator | ok: [testbed-node-4]
2026-02-03 04:39:52.474088 | orchestrator | ok: [testbed-node-5]
2026-02-03 04:39:52.474134 | orchestrator | ok: [testbed-manager]
2026-02-03 04:39:52.474149 | orchestrator |
2026-02-03 04:39:52.474162 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-02-03 04:39:52.474175 | orchestrator | Tuesday 03 February 2026 04:38:27 +0000 (0:00:02.642) 0:00:07.143 ******
2026-02-03 04:39:52.474218 | orchestrator | changed: [testbed-manager]
2026-02-03 04:39:52.474238 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:39:52.474256 | orchestrator | changed: [testbed-node-1]
2026-02-03 04:39:52.474306 | orchestrator | changed: [testbed-node-2]
2026-02-03 04:39:52.474326 | orchestrator | changed: [testbed-node-3]
2026-02-03 04:39:52.474343 | orchestrator | changed: [testbed-node-4]
2026-02-03 04:39:52.474357 | orchestrator | changed: [testbed-node-5]
2026-02-03 04:39:52.474368 | orchestrator |
2026-02-03 04:39:52.474394 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-02-03 04:39:52.474405 | orchestrator | Tuesday 03 February 2026 04:38:29 +0000 (0:00:01.645) 0:00:08.789 ******
2026-02-03 04:39:52.474416 | orchestrator | changed: [testbed-manager]
2026-02-03 04:39:52.474427 | orchestrator | changed: [testbed-node-4]
2026-02-03 04:39:52.474437 | orchestrator | changed: [testbed-node-3]
2026-02-03 04:39:52.474448 | orchestrator | changed: [testbed-node-5]
2026-02-03 04:39:52.474458 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:39:52.474469 | orchestrator | changed: [testbed-node-1]
2026-02-03 04:39:52.474479 | orchestrator | changed: [testbed-node-2]
2026-02-03 04:39:52.474490 | orchestrator |
2026-02-03 04:39:52.474501 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-02-03 04:39:52.474512 | orchestrator | Tuesday 03 February 2026 04:38:45 +0000 (0:00:16.086) 0:00:24.876 ******
2026-02-03 04:39:52.474522 | orchestrator | changed: [testbed-node-3]
2026-02-03 04:39:52.474533 | orchestrator | changed: [testbed-node-4]
2026-02-03 04:39:52.474544 | orchestrator | changed: [testbed-node-5]
2026-02-03 04:39:52.474555 | orchestrator | changed: [testbed-manager]
2026-02-03 04:39:52.474566 | orchestrator | changed: [testbed-node-2]
2026-02-03 04:39:52.474576 | orchestrator | changed: [testbed-node-1]
2026-02-03 04:39:52.474587 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:39:52.474597 | orchestrator |
2026-02-03 04:39:52.474608 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-02-03 04:39:52.474619 | orchestrator | Tuesday 03 February 2026 04:39:25 +0000 (0:00:39.917) 0:01:04.793 ******
2026-02-03 04:39:52.474631 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 04:39:52.474644 | orchestrator |
2026-02-03 04:39:52.474655 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-02-03 04:39:52.474666 | orchestrator | Tuesday 03 February 2026 04:39:27 +0000 (0:00:01.946) 0:01:06.740 ******
2026-02-03 04:39:52.474677 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-02-03 04:39:52.474689 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-02-03 04:39:52.474699 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-02-03 04:39:52.474710 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-02-03 04:39:52.474740 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-02-03 04:39:52.474752 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-02-03 04:39:52.474763 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-02-03 04:39:52.474774 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-02-03 04:39:52.474784 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-02-03 04:39:52.474795 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-02-03 04:39:52.474806 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-02-03 04:39:52.474817 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-02-03 04:39:52.474827 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-02-03 04:39:52.474838 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-02-03 04:39:52.474849 | orchestrator |
2026-02-03 04:39:52.474859 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-02-03 04:39:52.474871 | orchestrator | Tuesday 03 February 2026 04:39:31 +0000 (0:00:03.826) 0:01:10.567 ******
2026-02-03 04:39:52.474882 | orchestrator | ok: [testbed-manager]
2026-02-03 04:39:52.474893 | orchestrator | ok: [testbed-node-0]
2026-02-03 04:39:52.474912 | orchestrator | ok: [testbed-node-1]
2026-02-03 04:39:52.474923 | orchestrator | ok: [testbed-node-2]
2026-02-03 04:39:52.474934 | orchestrator | ok: [testbed-node-3]
2026-02-03 04:39:52.474945 | orchestrator | ok: [testbed-node-4]
2026-02-03 04:39:52.474955 | orchestrator | ok: [testbed-node-5]
2026-02-03 04:39:52.474985 | orchestrator |
2026-02-03 04:39:52.474997 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-02-03 04:39:52.475020 | orchestrator | Tuesday 03 February 2026 04:39:32 +0000 (0:00:01.356) 0:01:11.923 ******
2026-02-03 04:39:52.475031 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:39:52.475042 | orchestrator | changed: [testbed-node-1]
2026-02-03 04:39:52.475053 | orchestrator | changed: [testbed-manager]
2026-02-03 04:39:52.475064 | orchestrator | changed: [testbed-node-3]
2026-02-03 04:39:52.475075 | orchestrator | changed: [testbed-node-2]
2026-02-03 04:39:52.475086 | orchestrator | changed: [testbed-node-4]
2026-02-03 04:39:52.475097 | orchestrator | changed: [testbed-node-5]
2026-02-03 04:39:52.475108 | orchestrator |
2026-02-03 04:39:52.475119 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-02-03 04:39:52.475130 | orchestrator | Tuesday 03 February 2026 04:39:33 +0000 (0:00:01.335) 0:01:13.258 ******
2026-02-03 04:39:52.475141 | orchestrator | ok: [testbed-node-0]
2026-02-03 04:39:52.475152 | orchestrator | ok: [testbed-manager]
2026-02-03 04:39:52.475163 | orchestrator | ok: [testbed-node-1]
2026-02-03 04:39:52.475174 | orchestrator | ok: [testbed-node-2]
2026-02-03 04:39:52.475208 | orchestrator | ok: [testbed-node-3]
2026-02-03 04:39:52.475223 | orchestrator | ok: [testbed-node-4]
2026-02-03 04:39:52.475234 | orchestrator | ok: [testbed-node-5]
2026-02-03 04:39:52.475245 | orchestrator |
2026-02-03 04:39:52.475256 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-02-03 04:39:52.475267 | orchestrator | Tuesday 03 February 2026 04:39:35 +0000 (0:00:01.309) 0:01:14.568 ******
2026-02-03 04:39:52.475278 | orchestrator | ok: [testbed-node-0]
2026-02-03 04:39:52.475289 | orchestrator | ok: [testbed-node-2]
2026-02-03 04:39:52.475300 | orchestrator | ok: [testbed-node-1]
2026-02-03 04:39:52.475311 | orchestrator | ok: [testbed-manager]
2026-02-03 04:39:52.475321 | orchestrator | ok: [testbed-node-3]
2026-02-03 04:39:52.475332 | orchestrator | ok: [testbed-node-4]
2026-02-03 04:39:52.475343 | orchestrator | ok: [testbed-node-5]
2026-02-03 04:39:52.475354 | orchestrator |
2026-02-03 04:39:52.475365 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-02-03 04:39:52.475376 | orchestrator | Tuesday 03 February 2026 04:39:36 +0000 (0:00:01.721) 0:01:16.289 ******
2026-02-03 04:39:52.475401 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-02-03 04:39:52.475420 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 04:39:52.475470 | orchestrator |
2026-02-03 04:39:52.475488 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-02-03 04:39:52.475504 | orchestrator | Tuesday 03 February 2026 04:39:38 +0000 (0:00:01.543) 0:01:17.833 ******
2026-02-03 04:39:52.475521 | orchestrator | changed: [testbed-manager]
2026-02-03 04:39:52.475537 | orchestrator |
2026-02-03 04:39:52.475554 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-02-03 04:39:52.475573 | orchestrator | Tuesday 03 February 2026 04:39:40 +0000 (0:00:02.331) 0:01:20.164 ******
2026-02-03 04:39:52.475592 | orchestrator | changed: [testbed-node-3]
2026-02-03 04:39:52.475610 | orchestrator | changed: [testbed-manager]
2026-02-03 04:39:52.475629 | orchestrator | changed: [testbed-node-4]
2026-02-03 04:39:52.475641 | orchestrator | changed: [testbed-node-5]
2026-02-03 04:39:52.475652 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:39:52.475663 | orchestrator | changed: [testbed-node-1]
2026-02-03 04:39:52.475674 | orchestrator | changed: [testbed-node-2]
2026-02-03 04:39:52.475696 | orchestrator |
2026-02-03 04:39:52.475707 | orchestrator | PLAY RECAP *********************************************************************
2026-02-03 04:39:52.475718 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-03 04:39:52.475730 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-03 04:39:52.475742 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-03 04:39:52.475752 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-03 04:39:52.475774 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-03 04:39:52.965678 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-03 04:39:52.965773 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-03 04:39:52.965788 | orchestrator |
2026-02-03 04:39:52.965802 | orchestrator |
2026-02-03 04:39:52.965814 | orchestrator | TASKS RECAP ********************************************************************
2026-02-03 04:39:52.965827 | orchestrator | Tuesday 03 February 2026 04:39:52 +0000 (0:00:11.726) 0:01:31.890 ******
2026-02-03 04:39:52.965838 | orchestrator | ===============================================================================
2026-02-03 04:39:52.965849 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 39.92s
2026-02-03 04:39:52.965860 | orchestrator | osism.services.netdata : Add repository -------------------------------- 16.09s
2026-02-03 04:39:52.965871 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.73s
2026-02-03 04:39:52.965882 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.83s
2026-02-03 04:39:52.965893 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.64s
2026-02-03 04:39:52.965904 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.33s
2026-02-03 04:39:52.965915 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.95s
2026-02-03 04:39:52.965925 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.94s
2026-02-03 04:39:52.965936 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.72s
2026-02-03 04:39:52.965947 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.65s
2026-02-03 04:39:52.965958 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.54s
2026-02-03 04:39:52.965968 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.38s
2026-02-03 04:39:52.965979 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.36s
2026-02-03 04:39:52.965991 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.34s
2026-02-03 04:39:52.966002 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.31s
2026-02-03 04:39:52.966013 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.93s
2026-02-03 04:39:57.090591 | orchestrator | 2026-02-03 04:39:57 | INFO  | Task 07b605f1-3309-4997-a6a4-a927b3e61678 (prometheus) was prepared for execution.
2026-02-03 04:39:57.090670 | orchestrator | 2026-02-03 04:39:57 | INFO  | It takes a moment until task 07b605f1-3309-4997-a6a4-a927b3e61678 (prometheus) has been started and output is visible here.
2026-02-03 04:40:07.421929 | orchestrator |
2026-02-03 04:40:07.422152 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-03 04:40:07.422272 | orchestrator |
2026-02-03 04:40:07.422299 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-03 04:40:07.422340 | orchestrator | Tuesday 03 February 2026 04:40:01 +0000 (0:00:00.310) 0:00:00.310 ******
2026-02-03 04:40:07.422363 | orchestrator | ok: [testbed-manager]
2026-02-03 04:40:07.422384 | orchestrator | ok: [testbed-node-0]
2026-02-03 04:40:07.422404 | orchestrator | ok: [testbed-node-1]
2026-02-03 04:40:07.422425 | orchestrator | ok: [testbed-node-2]
2026-02-03 04:40:07.422446 | orchestrator | ok: [testbed-node-3]
2026-02-03 04:40:07.422470 | orchestrator | ok: [testbed-node-4]
2026-02-03 04:40:07.422493 | orchestrator | ok: [testbed-node-5]
2026-02-03 04:40:07.422514 | orchestrator |
2026-02-03 04:40:07.422537 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-03 04:40:07.422676 | orchestrator | Tuesday 03 February 2026 04:40:02 +0000 (0:00:00.967) 0:00:01.278 ******
2026-02-03 04:40:07.422699 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-02-03 04:40:07.422719 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-02-03 04:40:07.422739 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-02-03 04:40:07.422759 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-02-03 04:40:07.422779 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-02-03 04:40:07.422798 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-02-03 04:40:07.422818 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-02-03 04:40:07.422837 | orchestrator |
2026-02-03 04:40:07.422857 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-02-03 04:40:07.422876 | orchestrator |
2026-02-03 04:40:07.422896 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-02-03 04:40:07.422916 | orchestrator | Tuesday 03 February 2026 04:40:03 +0000 (0:00:01.083) 0:00:02.361 ******
2026-02-03 04:40:07.422937 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 04:40:07.422958 | orchestrator |
2026-02-03 04:40:07.422979 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-02-03 04:40:07.422998 | orchestrator | Tuesday 03 February 2026 04:40:05 +0000 (0:00:01.448) 0:00:03.810 ******
2026-02-03 04:40:07.423022 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-03 04:40:07.423049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-03 04:40:07.423070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-03 04:40:07.423106 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-03 04:40:07.423164 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-03 04:40:07.423186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-03 04:40:07.423237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 04:40:07.423260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 04:40:07.423280 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-03 04:40:07.423302 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-03 04:40:07.423336 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-03 04:40:07.423371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 04:40:08.335679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 04:40:08.335775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 04:40:08.335791 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-03 04:40:08.335805 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-03 04:40:08.335819 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-03 04:40:08.335850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-03 04:40:08.335878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 04:40:08.335897 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-03 04:40:08.335908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-03 04:40:08.335918 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-03 04:40:08.335928 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-03 04:40:08.335938 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 04:40:08.335956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-03 04:40:08.335966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 04:40:08.335988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 04:40:13.699742 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-03 04:40:13.699853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 04:40:13.699870 | orchestrator | 2026-02-03 04:40:13.699885 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-03 04:40:13.699899 | orchestrator | Tuesday 03 February 2026 04:40:08 +0000 (0:00:02.921) 0:00:06.731 ****** 2026-02-03 04:40:13.699911 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-03 04:40:13.699923 | orchestrator | 2026-02-03 04:40:13.699935 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-02-03 04:40:13.699946 | orchestrator | Tuesday 03 February 2026 04:40:10 +0000 (0:00:01.731) 0:00:08.462 ****** 2026-02-03 04:40:13.699959 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-03 04:40:13.699995 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-03 04:40:13.700009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-03 04:40:13.700020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-03 04:40:13.700065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2026-02-03 04:40:13.700079 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-03 04:40:13.700090 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-03 04:40:13.700103 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-03 04:40:13.700124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 04:40:13.700136 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-03 04:40:13.700147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 04:40:13.700165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 04:40:13.700186 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-03 04:40:15.672120 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-03 04:40:15.672255 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-03 04:40:15.672300 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-03 04:40:15.672315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 04:40:15.672328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 04:40:15.672339 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-03 04:40:15.672365 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 04:40:15.672396 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-03 04:40:15.672411 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}}}}) 2026-02-03 04:40:15.672434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-03 04:40:15.672446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-03 04:40:15.672457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-03 04:40:15.672469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 
'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 04:40:15.672481 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 04:40:15.672501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 04:40:16.745076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 04:40:16.745189 | orchestrator | 2026-02-03 04:40:16.745205 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-02-03 04:40:16.745283 | orchestrator | Tuesday 03 February 2026 04:40:15 +0000 (0:00:05.600) 0:00:14.063 ****** 2026-02-03 04:40:16.745305 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-03 04:40:16.745317 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-03 04:40:16.745327 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-03 04:40:16.745374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-03 04:40:16.745389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 04:40:16.745419 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-03 04:40:16.745439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 04:40:16.745450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-03 04:40:16.745459 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 04:40:16.745469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 04:40:16.745478 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:40:16.745489 | orchestrator | skipping: [testbed-manager] 2026-02-03 04:40:16.745503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-03 04:40:16.745512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 04:40:16.745529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 04:40:17.124061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-03 04:40:17.124161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 04:40:17.124177 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:40:17.124191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 
'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-03 04:40:17.124202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 04:40:17.124212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 04:40:17.124303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-03 04:40:17.124315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 04:40:17.124345 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:40:17.124373 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-03 04:40:17.124384 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-03 04:40:17.124395 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-03 04:40:17.124405 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:40:17.124415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-03 04:40:17.124426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-03 04:40:17.124441 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-03 04:40:17.124451 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:40:17.124461 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-03 04:40:17.124485 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-03 04:40:18.160901 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-03 04:40:18.160995 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:40:18.161009 | orchestrator | 2026-02-03 04:40:18.161020 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-02-03 04:40:18.161029 | orchestrator | Tuesday 03 February 2026 04:40:17 +0000 (0:00:01.573) 0:00:15.637 ****** 2026-02-03 04:40:18.161039 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-03 04:40:18.161049 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-03 04:40:18.161058 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-03 04:40:18.161084 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-03 04:40:18.161131 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 04:40:18.161142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-03 04:40:18.161151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 04:40:18.161159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 04:40:18.161168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-03 04:40:18.161177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 04:40:18.161197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-03 04:40:18.161205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 04:40:18.161276 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 04:40:19.627349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-03 04:40:19.627460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 04:40:19.627477 | orchestrator | skipping: [testbed-manager] 2026-02-03 04:40:19.627491 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:40:19.627502 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:40:19.627513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-03 04:40:19.627526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 04:40:19.627561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 04:40:19.627619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-03 04:40:19.627631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 04:40:19.627643 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:40:19.627675 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-03 04:40:19.627687 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-03 
04:40:19.627699 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-03 04:40:19.627710 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:40:19.627721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-03 04:40:19.627741 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-03 04:40:19.627757 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-03 04:40:19.627769 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:40:19.627780 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-03 04:40:19.627799 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-03 04:40:23.748540 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-03 04:40:23.749453 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:40:23.749486 | orchestrator | 2026-02-03 04:40:23.749498 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-02-03 04:40:23.749510 | orchestrator | Tuesday 03 February 2026 04:40:19 +0000 (0:00:02.385) 0:00:18.023 ****** 2026-02-03 04:40:23.749522 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-03 04:40:23.749535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-03 04:40:23.749569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-03 04:40:23.749595 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-03 04:40:23.749605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-03 04:40:23.749634 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-03 04:40:23.749645 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-03 04:40:23.749656 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-03 04:40:23.749666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 04:40:23.749683 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-03 04:40:23.749694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 04:40:23.749709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 04:40:23.749719 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-03 04:40:23.749738 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-03 04:40:26.687644 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-03 04:40:26.687758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 04:40:26.687800 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-03 04:40:26.687814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 04:40:26.687841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 04:40:26.687853 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-03 04:40:26.687865 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-03 04:40:26.687896 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-03 04:40:26.687911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-03 04:40:26.687931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-03 04:40:26.687943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-03 04:40:26.687960 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'],
'dimensions': {}}})
2026-02-03 04:40:26.687972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 04:40:26.687984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 04:40:26.688004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 04:40:30.736049 | orchestrator |
2026-02-03 04:40:30.736152 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2026-02-03 04:40:30.736169 | orchestrator | Tuesday 03 February 2026 04:40:26 +0000 (0:00:07.045) 0:00:25.069 ******
2026-02-03 04:40:30.736182 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-03 04:40:30.736222 | orchestrator |
2026-02-03 04:40:30.736264 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2026-02-03 04:40:30.736277 | orchestrator | Tuesday 03 February 2026 04:40:27 +0000 (0:00:00.939) 0:00:26.009 ******
2026-02-03 04:40:30.736290 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084523, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0254023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:30.736305 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084523, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0254023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:30.736317 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084523, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0254023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:30.736343 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084523, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0254023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:30.736356 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1084580, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0413013, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:30.736369 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1084580, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0413013, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:30.736399 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084523, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0254023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:30.736421 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084523, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0254023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:30.736433 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084515, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.024316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:30.736445 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084523, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0254023, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:30.736475 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1084580, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0413013, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:30.736488 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1084580, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0413013, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:30.736499 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False,
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1084561, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0385332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:30.736526 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1084580, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0413013, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:32.551078 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084515, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.024316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:32.551212 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084515, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.024316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:32.551231 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1084580, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0413013, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:32.551296 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084509, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0225947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:32.551310 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084515, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.024316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:32.551322 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1084561, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0385332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:32.551358 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084515, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.024316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:32.551389 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1084561, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0385332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:32.551402 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084526, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0258677, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:32.551413 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084515, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.024316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:32.551430 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084509, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0225947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:32.551442 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1084549, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0352604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:32.551454 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1084580, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0413013, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:32.551473 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1084561, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0385332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:32.551492 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1084561, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime':
1764530892.0, 'ctime': 1770086463.0385332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:34.371554 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084509, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0225947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:34.371656 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1084561, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0385332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:34.371687 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084526, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0258677, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:34.371698 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084509, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0225947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:34.371709 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1084529, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0263665, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:34.371740 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084526, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0258677, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:34.371751 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084509, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0225947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:34.371777 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084509, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0225947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:34.371789 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1084549, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0352604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:34.371804 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1084549, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0352604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:34.371816 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1084520, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.024316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:34.371835 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1084529, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0263665, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:34.371846 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1084529, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0263665, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:34.371856 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084526, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0258677, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:34.371873 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084526, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0258677, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:36.488611 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084526, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0258677, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:36.488721 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1084520, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.024316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:36.488733 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1084549, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0352604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:36.488758 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084515, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.024316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:36.488765 | orchestrator | skipping: [testbed-node-0] =>
(item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084576, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0402718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:36.488772 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1084520, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.024316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:36.488779 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1084549, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0352604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:36.488799 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084576, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0402718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:36.488809 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1084529, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0263665, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:36.488816 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1084549, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0352604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:36.488826 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084505, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 
'ctime': 1770086463.0219696, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:36.488833 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1084520, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.024316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:36.488838 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084576, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0402718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:36.488844 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1084561, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0385332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-03 04:40:36.488856 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1084529, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0263665, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:38.383913 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084505, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0219696, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:38.384047 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1084529, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0263665, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:38.384065 | orchestrator | skipping: [testbed-node-3] 
=> (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084505, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0219696, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:38.384077 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1084612, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.048199, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:38.384089 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084576, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0402718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:38.384101 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1084520, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.024316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:38.384112 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1084612, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.048199, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:38.384158 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1084572, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.039949, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:38.384181 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084505, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1770086463.0219696, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:38.384193 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1084520, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.024316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:38.384205 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084576, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0402718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:38.384216 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1084612, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.048199, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:38.384228 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084509, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0225947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-03 04:40:38.384239 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084505, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0219696, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:38.384311 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1084572, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.039949, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:40.683706 | orchestrator | skipping: 
[testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084511, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0233161, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:40.683811 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1084612, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.048199, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:40.683828 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084576, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0402718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:40.683840 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1084612, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.048199, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:40.683852 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1084572, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.039949, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:40.683866 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1084572, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.039949, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:40.683895 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1084508, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1770086463.0222373, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:40.683948 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084511, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0233161, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:40.683962 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084511, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0233161, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:40.683974 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1084572, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.039949, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:40.683986 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084505, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0219696, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:40.683998 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084511, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0233161, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:40.684009 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1084508, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0222373, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:40.684035 | orchestrator | skipping: 
[testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084537, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0330834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:40.684055 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084526, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0258677, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-03 04:40:42.383358 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1084508, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0222373, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-03 04:40:42.383447 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084537, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0330834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:42.383460 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084511, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0233161, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:42.383470 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1084508, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0222373, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:42.383478 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1084612, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.048199, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:42.383526 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084532, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0275729, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:42.383535 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1084508, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0222373, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:42.383559 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084532, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0275729, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:42.383568 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084537, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0330834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:42.383576 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1084572, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.039949, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:42.383586 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084537, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0330834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:42.383594 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084537, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0330834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:42.383612 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084532, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0275729, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:42.383621 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1084604, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.047302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:42.383636 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1084549, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0352604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:49.458498 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:40:49.458593 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084532, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0275729, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:49.458607 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1084604, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.047302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:49.458615 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:40:49.458622 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084532, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0275729, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:49.458647 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1084604, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.047302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:49.458654 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:40:49.458671 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084511, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0233161, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:49.458678 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1084604, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.047302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:49.458685 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:40:49.458704 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1084604, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.047302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:49.458712 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1084508, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0222373, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:49.458719 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:40:49.458726 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084537, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0330834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:49.458732 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1084529, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0263665, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:49.458744 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084532, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0275729, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:49.458755 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1084604, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.047302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False,
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:49.458761 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:40:49.458768 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1084520, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.024316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:40:49.458779 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084576, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0402718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:41:16.703876 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084505, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0219696, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:41:16.703992 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1084612, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.048199, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:41:16.704048 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1084572, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.039949, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:41:16.704070 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084511, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0233161, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:41:16.704107 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1084508, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0222373, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:41:16.704128 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084537, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0330834, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:41:16.704147 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084532, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0275729, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:41:16.704177 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1084604, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.047302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-03 04:41:16.704189 | orchestrator |
2026-02-03 04:41:16.704203 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-02-03 04:41:16.704215 | orchestrator | Tuesday 03 February 2026 04:40:56 +0000 (0:00:28.420) 0:00:54.429 ******
2026-02-03 04:41:16.704225 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-03 04:41:16.704244 | orchestrator |
2026-02-03 04:41:16.704255 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-02-03 04:41:16.704264 | orchestrator | Tuesday 03 February 2026 04:40:56 +0000 (0:00:00.814) 0:00:55.244 ******
2026-02-03 04:41:16.704274 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2026-02-03 04:41:16.704374 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2026-02-03 04:41:16.704473 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2026-02-03 04:41:16.704534 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2026-02-03 04:41:16.704592 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2026-02-03 04:41:16.704648 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2026-02-03 04:41:16.704713 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2026-02-03 04:41:16.704769 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-03 04:41:16.704779 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-03 04:41:16.704789 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-03 04:41:16.704798 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-03 04:41:16.704808 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-03 04:41:16.704818 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-03 04:41:16.704827 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-03 04:41:16.704844 | orchestrator |
2026-02-03 04:41:16.704854 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-02-03 04:41:16.704864 | orchestrator | Tuesday 03 February 2026 04:40:58 +0000 (0:00:02.043) 0:00:57.288 ******
2026-02-03 04:41:16.704874 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-03 04:41:16.704885 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-03 04:41:16.704894 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:41:16.704904 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:41:16.704914 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-03 04:41:16.704924 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:41:16.704942 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-03 04:41:34.899745 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:41:34.899862 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-03 04:41:34.899890 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:41:34.899909 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-03 04:41:34.899927 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:41:34.899946 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-03 04:41:34.899966 | orchestrator |
2026-02-03 04:41:34.900023 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-02-03 04:41:34.900043 | orchestrator | Tuesday 03 February 2026 04:41:16 +0000 (0:00:17.814) 0:01:15.102 ******
2026-02-03 04:41:34.900061 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-03 04:41:34.900080 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:41:34.900098 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-03 04:41:34.900117 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:41:34.900135 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-03 04:41:34.900154 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:41:34.900172 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-03 04:41:34.900190 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:41:34.900209 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-03 04:41:34.900227 | orchestrator |
skipping: [testbed-node-3] 2026-02-03 04:41:34.900246 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-03 04:41:34.900266 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:41:34.900288 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-02-03 04:41:34.900309 | orchestrator | 2026-02-03 04:41:34.900330 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-02-03 04:41:34.900380 | orchestrator | Tuesday 03 February 2026 04:41:19 +0000 (0:00:03.004) 0:01:18.107 ****** 2026-02-03 04:41:34.900402 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-03 04:41:34.900424 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:41:34.900445 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-03 04:41:34.900464 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-03 04:41:34.900483 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:41:34.900501 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:41:34.900553 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-03 04:41:34.900572 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:41:34.900589 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-02-03 04:41:34.900627 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  
2026-02-03 04:41:34.900646 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:41:34.900664 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-03 04:41:34.900679 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:41:34.900695 | orchestrator | 2026-02-03 04:41:34.900710 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-02-03 04:41:34.900729 | orchestrator | Tuesday 03 February 2026 04:41:21 +0000 (0:00:02.272) 0:01:20.379 ****** 2026-02-03 04:41:34.900746 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-03 04:41:34.900762 | orchestrator | 2026-02-03 04:41:34.900778 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-02-03 04:41:34.900795 | orchestrator | Tuesday 03 February 2026 04:41:22 +0000 (0:00:00.786) 0:01:21.166 ****** 2026-02-03 04:41:34.900811 | orchestrator | skipping: [testbed-manager] 2026-02-03 04:41:34.900827 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:41:34.900843 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:41:34.900859 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:41:34.900876 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:41:34.900892 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:41:34.900908 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:41:34.900924 | orchestrator | 2026-02-03 04:41:34.900940 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-02-03 04:41:34.900956 | orchestrator | Tuesday 03 February 2026 04:41:23 +0000 (0:00:00.769) 0:01:21.936 ****** 2026-02-03 04:41:34.900971 | orchestrator | skipping: [testbed-manager] 2026-02-03 04:41:34.900987 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:41:34.901004 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:41:34.901020 | 
orchestrator | skipping: [testbed-node-5]
2026-02-03 04:41:34.901037 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:41:34.901053 | orchestrator | changed: [testbed-node-1]
2026-02-03 04:41:34.901068 | orchestrator | changed: [testbed-node-2]
2026-02-03 04:41:34.901085 | orchestrator |
2026-02-03 04:41:34.901101 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-02-03 04:41:34.901145 | orchestrator | Tuesday 03 February 2026 04:41:25 +0000 (0:00:02.173) 0:01:24.109 ******
2026-02-03 04:41:34.901163 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-03 04:41:34.901180 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-03 04:41:34.901195 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-03 04:41:34.901212 | orchestrator | skipping: [testbed-manager]
2026-02-03 04:41:34.901228 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-03 04:41:34.901244 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-03 04:41:34.901260 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:41:34.901275 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:41:34.901290 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:41:34.901307 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:41:34.901323 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-03 04:41:34.901373 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:41:34.901408 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-03 04:41:34.901424 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:41:34.901441 | orchestrator |
2026-02-03 04:41:34.901458 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-02-03 04:41:34.901474 | orchestrator | Tuesday 03 February 2026 04:41:27 +0000 (0:00:01.696) 0:01:25.806 ******
2026-02-03 04:41:34.901492 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-03 04:41:34.901511 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:41:34.901530 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-03 04:41:34.901548 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:41:34.901565 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-03 04:41:34.901583 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:41:34.901629 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-03 04:41:34.901647 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:41:34.901664 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-03 04:41:34.901682 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:41:34.901700 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-03 04:41:34.901718 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:41:34.901736 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-03 04:41:34.901754 | orchestrator |
2026-02-03 04:41:34.901773 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-02-03 04:41:34.901790 | orchestrator | Tuesday 03 February 2026 04:41:28 +0000 (0:00:01.583) 0:01:27.389 ******
2026-02-03 04:41:34.901808 | orchestrator | [WARNING]: Skipped
2026-02-03 04:41:34.901828 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2026-02-03 04:41:34.901847 | orchestrator | due to this access issue:
2026-02-03 04:41:34.901878 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2026-02-03 04:41:34.901897 | orchestrator | not a directory
2026-02-03 04:41:34.901915 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-03 04:41:34.901932 | orchestrator |
2026-02-03 04:41:34.901950 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-02-03 04:41:34.901969 | orchestrator | Tuesday 03 February 2026 04:41:30 +0000 (0:00:01.215) 0:01:28.605 ******
2026-02-03 04:41:34.901987 | orchestrator | skipping: [testbed-manager]
2026-02-03 04:41:34.902006 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:41:34.902105 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:41:34.902121 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:41:34.902131 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:41:34.902141 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:41:34.902150 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:41:34.902160 | orchestrator |
2026-02-03 04:41:34.902170 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-02-03 04:41:34.902180 | orchestrator | Tuesday 03 February 2026 04:41:31 +0000 (0:00:01.062) 0:01:29.667 ******
2026-02-03 04:41:34.902190 | orchestrator | skipping: [testbed-manager]
2026-02-03 04:41:34.902199 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:41:34.902209 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:41:34.902219 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:41:34.902228 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:41:34.902238 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:41:34.902247 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:41:34.902269 | orchestrator |
2026-02-03 04:41:34.902279 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2026-02-03 04:41:34.902289 | orchestrator | Tuesday 03 February 2026 04:41:32 +0000 (0:00:01.017) 0:01:30.685 ******
2026-02-03 04:41:34.902320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-03 04:41:36.613652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-03 04:41:36.613745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-03 04:41:36.613761 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-03 04:41:36.613773 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-03 04:41:36.613798 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-03 04:41:36.613808 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-03 04:41:36.613838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 04:41:36.613864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 04:41:36.613874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 04:41:36.613883 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-03 04:41:36.613893 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-03 04:41:36.613908 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-03 04:41:36.613918 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-03 04:41:36.613943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 04:41:36.613954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 04:41:36.613970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 04:41:38.667825 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-03 04:41:38.667900 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-03 04:41:38.667908 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-03 04:41:38.667924 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-03 04:41:38.667928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-03 04:41:38.667947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-03 04:41:38.667963 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-03 04:41:38.667968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-03 04:41:38.667972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 04:41:38.667976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 04:41:38.667984 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 04:41:38.667992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-03 04:41:38.667996 | orchestrator |
2026-02-03 04:41:38.668001 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-02-03 04:41:38.668006 | orchestrator | Tuesday 03 February 2026 04:41:36 +0000 (0:00:04.329) 0:01:35.015 ******
2026-02-03 04:41:38.668010 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-02-03 04:41:38.668015 | orchestrator | skipping: [testbed-manager]
2026-02-03 04:41:38.668019 | orchestrator |
2026-02-03 04:41:38.668023 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-03 04:41:38.668027 | orchestrator | Tuesday 03 February 2026 04:41:37 +0000 (0:00:01.297) 0:01:36.312 ******
2026-02-03 04:41:38.668031 | orchestrator |
2026-02-03 04:41:38.668035 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-03 04:41:38.668039 | orchestrator | Tuesday 03 February 2026 04:41:38 +0000 (0:00:00.259) 0:01:36.572 ******
2026-02-03 04:41:38.668042 | orchestrator |
2026-02-03 04:41:38.668046 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-03 04:41:38.668050 | orchestrator | Tuesday 03 February 2026 04:41:38 +0000 (0:00:00.074) 0:01:36.646 ******
2026-02-03 04:41:38.668054 | orchestrator |
2026-02-03 04:41:38.668057 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-03 04:41:38.668061 | orchestrator | Tuesday 03 February 2026 04:41:38 +0000 (0:00:00.079) 0:01:36.726 ******
2026-02-03 04:41:38.668065 | orchestrator |
2026-02-03 04:41:38.668069 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-03 04:41:38.668072 | orchestrator | Tuesday 03 February 2026 04:41:38 +0000 (0:00:00.081) 0:01:36.808 ******
2026-02-03 04:41:38.668076 | orchestrator |
2026-02-03 04:41:38.668080 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-03 04:41:38.668084 | orchestrator | Tuesday 03 February 2026 04:41:38 +0000 (0:00:00.077) 0:01:36.885 ******
2026-02-03 04:41:38.668088 | orchestrator |
2026-02-03 04:41:38.668094 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-03 04:43:24.677834 | orchestrator | Tuesday 03 February 2026 04:41:38 +0000 (0:00:00.067) 0:01:36.953 ******
2026-02-03 04:43:24.677958 | orchestrator |
2026-02-03 04:43:24.677980 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-02-03 04:43:24.677992 | orchestrator | Tuesday 03 February 2026 04:41:38 +0000 (0:00:00.095) 0:01:37.048 ******
2026-02-03 04:43:24.678003 | orchestrator | changed: [testbed-manager]
2026-02-03 04:43:24.678073 | orchestrator |
2026-02-03 04:43:24.678090 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-02-03 04:43:24.678102 | orchestrator | Tuesday 03 February 2026 04:42:05 +0000 (0:00:26.410) 0:02:03.459 ******
2026-02-03 04:43:24.678115 | orchestrator | changed: [testbed-node-1]
2026-02-03 04:43:24.678126 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:43:24.678146 | orchestrator | changed: [testbed-node-4]
2026-02-03 04:43:24.678158 | orchestrator | changed: [testbed-node-5]
2026-02-03 04:43:24.678169 | orchestrator | changed: [testbed-node-2]
2026-02-03 04:43:24.678180 | orchestrator | changed: [testbed-manager]
2026-02-03 04:43:24.678191 | orchestrator | changed: [testbed-node-3]
2026-02-03 04:43:24.678202 | orchestrator |
2026-02-03 04:43:24.678213 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-02-03 04:43:24.678250 | orchestrator | Tuesday 03 February 2026 04:42:17 +0000 (0:00:12.640) 0:02:16.099 ******
2026-02-03 04:43:24.678258 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:43:24.678264 | orchestrator | changed: [testbed-node-1]
2026-02-03 04:43:24.678271 | orchestrator | changed: [testbed-node-2]
2026-02-03 04:43:24.678277 | orchestrator |
2026-02-03 04:43:24.678283 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-02-03 04:43:24.678291 | orchestrator | Tuesday 03 February 2026 04:42:23 +0000 (0:00:05.810) 0:02:21.909 ******
2026-02-03 04:43:24.678297 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:43:24.678303 | orchestrator | changed: [testbed-node-1]
2026-02-03 04:43:24.678310 | orchestrator | changed: [testbed-node-2]
2026-02-03 04:43:24.678316 | orchestrator |
2026-02-03 04:43:24.678322 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-02-03 04:43:24.678329 | orchestrator | Tuesday 03 February 2026 04:42:34 +0000 (0:00:10.914) 0:02:32.824 ******
2026-02-03 04:43:24.678335 | orchestrator | changed: [testbed-node-1]
2026-02-03 04:43:24.678341 | orchestrator | changed: [testbed-manager]
2026-02-03 04:43:24.678347 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:43:24.678353 | orchestrator | changed: [testbed-node-3]
2026-02-03 04:43:24.678360 | orchestrator | changed: [testbed-node-4]
2026-02-03 04:43:24.678367 | orchestrator | changed: [testbed-node-5]
2026-02-03 04:43:24.678374 | orchestrator | changed: [testbed-node-2]
2026-02-03 04:43:24.678381 | orchestrator |
2026-02-03 04:43:24.678389 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-02-03 04:43:24.678396 | orchestrator | Tuesday 03 February 2026 04:42:48 +0000 (0:00:14.518) 0:02:47.342 ******
2026-02-03 04:43:24.678515 | orchestrator | changed: [testbed-manager]
2026-02-03 04:43:24.678531 | orchestrator |
2026-02-03 04:43:24.678541 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-02-03 04:43:24.678551 | orchestrator | Tuesday 03 February 2026 04:42:57 +0000 (0:00:08.361) 0:02:55.704 ******
2026-02-03 04:43:24.678561 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:43:24.678572 | orchestrator | changed: [testbed-node-2]
2026-02-03 04:43:24.678582 | orchestrator | changed: [testbed-node-1]
2026-02-03 04:43:24.678593 | orchestrator |
2026-02-03 04:43:24.678604 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-02-03 04:43:24.678615 | orchestrator | Tuesday 03 February 2026 04:43:07 +0000 (0:00:10.371) 0:03:06.075 ******
2026-02-03 04:43:24.678623 | orchestrator | changed: [testbed-manager]
2026-02-03 04:43:24.678630 | orchestrator |
2026-02-03 04:43:24.678638 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-02-03 04:43:24.678645 | orchestrator | Tuesday 03 February 2026 04:43:13 +0000 (0:00:05.859) 0:03:11.935 ******
2026-02-03 04:43:24.678653 | orchestrator | changed: [testbed-node-3]
2026-02-03 04:43:24.678660 | orchestrator | changed: [testbed-node-4]
2026-02-03 04:43:24.678667 | orchestrator | changed: [testbed-node-5]
2026-02-03 04:43:24.678674 | orchestrator |
2026-02-03 04:43:24.678681 | orchestrator | PLAY RECAP *********************************************************************
2026-02-03 04:43:24.678690 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-03 04:43:24.678699 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-03 04:43:24.678707 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-03 04:43:24.678714 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-03 04:43:24.678722 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-03 04:43:24.678738 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-03 04:43:24.678745 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-03 04:43:24.678751 | orchestrator |
2026-02-03 04:43:24.678758 | orchestrator |
2026-02-03 04:43:24.678764 | orchestrator | TASKS RECAP ********************************************************************
2026-02-03 04:43:24.678770 | orchestrator | Tuesday 03 February 2026 04:43:24 +0000 (0:00:10.529) 0:03:22.465 ******
2026-02-03 04:43:24.678777 | orchestrator | ===============================================================================
2026-02-03 04:43:24.678802 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 28.42s
2026-02-03 04:43:24.678814 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 26.41s
2026-02-03 04:43:24.678824 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 17.81s
2026-02-03 04:43:24.678833 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.52s
2026-02-03 04:43:24.678839 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 12.64s
2026-02-03 04:43:24.678845 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.91s
2026-02-03 04:43:24.678851 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.53s
2026-02-03 04:43:24.678858 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.37s
2026-02-03 04:43:24.678864 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.36s
2026-02-03 04:43:24.678870 | orchestrator | prometheus : Copying over config.json files ----------------------------- 7.05s
2026-02-03 04:43:24.678876 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.86s
2026-02-03 04:43:24.678882 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.81s
2026-02-03 04:43:24.678888 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.60s
2026-02-03 04:43:24.678895 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.33s
2026-02-03 04:43:24.678901 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.00s
2026-02-03 04:43:24.678907 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.92s
2026-02-03 04:43:24.678913 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.39s
2026-02-03 04:43:24.678919 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.27s
2026-02-03 04:43:24.678925 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.17s
2026-02-03 04:43:24.678932 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.04s
2026-02-03 04:43:29.138726 | orchestrator | 2026-02-03 04:43:29 | INFO  | Task ed610cf0-f2d9-4370-ba3d-a0ced00b56a9 (grafana) was prepared for execution.
2026-02-03 04:43:29.138846 | orchestrator | 2026-02-03 04:43:29 | INFO  | It takes a moment until task ed610cf0-f2d9-4370-ba3d-a0ced00b56a9 (grafana) has been started and output is visible here.
2026-02-03 04:43:39.760228 | orchestrator |
2026-02-03 04:43:39.760309 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-03 04:43:39.760316 | orchestrator |
2026-02-03 04:43:39.760321 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-03 04:43:39.760326 | orchestrator | Tuesday 03 February 2026 04:43:33 +0000 (0:00:00.267) 0:00:00.267 ******
2026-02-03 04:43:39.760331 | orchestrator | ok: [testbed-node-0]
2026-02-03 04:43:39.760336 | orchestrator | ok: [testbed-node-1]
2026-02-03 04:43:39.760340 | orchestrator | ok: [testbed-node-2]
2026-02-03 04:43:39.760344 | orchestrator |
2026-02-03 04:43:39.760348 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-03 04:43:39.760368 | orchestrator | Tuesday 03 February 2026 04:43:34 +0000 (0:00:00.350) 0:00:00.618 ******
2026-02-03 04:43:39.760373 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-02-03 04:43:39.760377 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-02-03 04:43:39.760381 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-02-03 04:43:39.760385 | orchestrator |
2026-02-03 04:43:39.760389 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-02-03 04:43:39.760393 | orchestrator |
2026-02-03 04:43:39.760397 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-02-03 04:43:39.760401 | orchestrator | Tuesday 03 February 2026 04:43:34 +0000 (0:00:00.522) 0:00:01.140 ******
2026-02-03 04:43:39.760406 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 04:43:39.760411 | orchestrator |
2026-02-03 04:43:39.760414 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-02-03 04:43:39.760418 | orchestrator | Tuesday 03 February 2026 04:43:35 +0000 (0:00:00.642) 0:00:01.783 ****** 2026-02-03 04:43:39.760425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-03 04:43:39.760431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-03 04:43:39.760435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-03 04:43:39.760439 | orchestrator | 2026-02-03 04:43:39.760443 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-02-03 04:43:39.760447 | orchestrator | Tuesday 03 February 2026 04:43:36 +0000 (0:00:01.008) 0:00:02.791 ****** 2026-02-03 04:43:39.760451 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-02-03 04:43:39.760456 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-02-03 04:43:39.760460 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-03 04:43:39.760464 | orchestrator | 2026-02-03 04:43:39.760468 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-02-03 04:43:39.760472 | orchestrator | Tuesday 03 February 2026 04:43:37 +0000 (0:00:00.876) 0:00:03.667 ****** 2026-02-03 04:43:39.760480 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 04:43:39.760484 | orchestrator | 2026-02-03 04:43:39.760488 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-02-03 04:43:39.760574 | orchestrator | Tuesday 03 February 2026 04:43:37 +0000 (0:00:00.566) 0:00:04.234 ****** 2026-02-03 04:43:39.760593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-03 04:43:39.760598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-03 04:43:39.760602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-03 04:43:39.760606 | orchestrator | 2026-02-03 04:43:39.760610 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-02-03 04:43:39.760614 | orchestrator | Tuesday 03 February 2026 04:43:39 
+0000 (0:00:01.389) 0:00:05.624 ****** 2026-02-03 04:43:39.760617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-03 04:43:39.760622 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:43:39.760626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-03 04:43:39.760633 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:43:39.760645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-03 04:43:46.991029 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:43:46.991171 | orchestrator | 2026-02-03 04:43:46.991201 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-02-03 04:43:46.991222 | orchestrator | Tuesday 03 February 2026 04:43:39 +0000 (0:00:00.610) 0:00:06.234 ****** 2026-02-03 04:43:46.991246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-03 04:43:46.991270 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:43:46.991291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-03 04:43:46.991311 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:43:46.991330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-03 04:43:46.991350 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:43:46.991370 | orchestrator | 2026-02-03 04:43:46.991388 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-02-03 04:43:46.991406 | orchestrator | Tuesday 03 February 2026 04:43:40 +0000 (0:00:00.710) 0:00:06.944 ****** 2026-02-03 04:43:46.991425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-03 04:43:46.991498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-03 04:43:46.991587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-03 04:43:46.991611 | orchestrator | 2026-02-03 04:43:46.991630 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-02-03 04:43:46.991649 | orchestrator | Tuesday 03 February 2026 04:43:41 +0000 (0:00:01.339) 0:00:08.284 ****** 2026-02-03 04:43:46.991669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-03 04:43:46.991689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-03 04:43:46.991710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 
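For readability: the `item` payload that repeats throughout the grafana tasks above (and wraps across many log lines) always has the same shape. Below is a sketch of that dict, reconstructed verbatim from the log entries; it is a reading aid, not part of the job output.

```python
# Loop item for the grafana tasks, as reconstructed from the log lines above.
# Note the mixed truthy styles kolla-ansible emits: 'enabled': 'yes' (string)
# for the internal haproxy listener vs. 'enabled': True (bool) elsewhere.
grafana_item = {
    "key": "grafana",
    "value": {
        "container_name": "grafana",
        "group": "grafana",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/grafana:12.3.0.20251130",
        "volumes": [
            "/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
        "haproxy": {
            # Internal listener: plain HTTP on port 3000.
            "grafana_server": {
                "enabled": "yes",
                "mode": "http",
                "external": False,
                "port": "3000",
                "listen_port": "3000",
            },
            # External listener: exposed under the testbed API FQDN.
            "grafana_server_external": {
                "enabled": True,
                "mode": "http",
                "external": True,
                "external_fqdn": "api.testbed.osism.xyz",
                "port": "3000",
                "listen_port": "3000",
            },
        },
    },
}

print(grafana_item["value"]["haproxy"]["grafana_server_external"]["external_fqdn"])
```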
2026-02-03 04:43:46.991741 | orchestrator |
2026-02-03 04:43:46.991760 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-02-03 04:43:46.991778 | orchestrator | Tuesday 03 February 2026 04:43:43 +0000 (0:00:01.689) 0:00:09.974 ******
2026-02-03 04:43:46.991797 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:43:46.991816 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:43:46.991836 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:43:46.991855 | orchestrator |
2026-02-03 04:43:46.991874 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-02-03 04:43:46.991894 | orchestrator | Tuesday 03 February 2026 04:43:43 +0000 (0:00:00.368) 0:00:10.342 ******
2026-02-03 04:43:46.991912 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-03 04:43:46.991932 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-03 04:43:46.991950 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-03 04:43:46.991967 | orchestrator |
2026-02-03 04:43:46.991985 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-02-03 04:43:46.992003 | orchestrator | Tuesday 03 February 2026 04:43:45 +0000 (0:00:01.303) 0:00:11.646 ******
2026-02-03 04:43:46.992021 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-03 04:43:46.992151 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-03 04:43:46.992172 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-03 04:43:46.992191 | orchestrator |
2026-02-03 04:43:46.992210 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-02-03 04:43:46.992244 | orchestrator | Tuesday 03 February 2026 04:43:46 +0000 (0:00:00.787) 0:00:13.462 ******
2026-02-03 04:43:53.708665 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-03 04:43:53.708777 | orchestrator |
2026-02-03 04:43:53.708795 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-02-03 04:43:53.708808 | orchestrator | Tuesday 03 February 2026 04:43:47 +0000 (0:00:00.787) 0:00:14.249 ******
2026-02-03 04:43:53.708819 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-02-03 04:43:53.708831 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-02-03 04:43:53.708842 | orchestrator | ok: [testbed-node-0]
2026-02-03 04:43:53.708854 | orchestrator | ok: [testbed-node-1]
2026-02-03 04:43:53.708865 | orchestrator | ok: [testbed-node-2]
2026-02-03 04:43:53.708876 | orchestrator |
2026-02-03 04:43:53.708887 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-02-03 04:43:53.708898 | orchestrator | Tuesday 03 February 2026 04:43:48 +0000 (0:00:00.736) 0:00:14.985 ******
2026-02-03 04:43:53.708909 | orchestrator | skipping: [testbed-node-0]
2026-02-03 04:43:53.708920 | orchestrator | skipping: [testbed-node-1]
2026-02-03 04:43:53.708931 | orchestrator | skipping: [testbed-node-2]
2026-02-03 04:43:53.708942 | orchestrator |
2026-02-03 04:43:53.708953 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-02-03 04:43:53.708964 | orchestrator | Tuesday 03 February 2026 04:43:48 +0000 (0:00:00.403) 0:00:15.388 ******
2026-02-03 04:43:53.708979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path':
'/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1084227, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.966338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:43:53.709021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1084227, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.966338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:43:53.709034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1084227, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.966338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:43:53.709047 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1084303, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.9833157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:43:53.709094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1084303, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.9833157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:43:53.709109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1084303, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.9833157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:43:53.709123 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1084249, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.971218, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:43:53.709145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1084249, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.971218, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:43:53.709158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1084249, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.971218, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:43:53.709171 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1084306, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.9863157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:43:53.709190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1084306, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.9863157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:43:53.709211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1084306, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.9863157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2026-02-03 04:43:57.362383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1084280, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.975707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:43:57.362621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1084280, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.975707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:43:57.363479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1084280, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.975707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:43:57.363566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1084290, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.980879, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:43:57.363593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1084290, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.980879, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:43:57.363635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1084290, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.980879, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:43:57 -> 04:44:13 | orchestrator | changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] (loop item output condensed: each item below was reported changed on all three nodes; every item is a regular file under /operations/grafana/dashboards/, mode 0644, uid 0, gid 0, gr_name root, pw_name root, nlink 1, dev 114, atime/mtime 1764530892.0; per-file size, inode, ctime follow)
    ceph/README.md (size 84, inode 1084224, ctime 1770086462.965005)
    ceph/ceph-cluster.json (size 34113, inode 1084234, ctime 1770086462.9682796)
    ceph/cephfs-overview.json (size 9025, inode 1084254, ctime 1770086462.9713154)
    ceph/pool-detail.json (size 19609, inode 1084284, ctime 1770086462.9773157)
    ceph/rbd-details.json (size 12997, inode 1084299, ctime 1770086462.9823155)
    ceph/ceph_overview.json (size 80386, inode 1084240, ctime 1770086462.970174)
    ceph/radosgw-detail.json (size 19695, inode 1084288, ctime 1770086462.979051)
    ceph/osds-overview.json (size 38432, inode 1084283, ctime 1770086462.9763155)
    ceph/multi-cluster-overview.json (size 62676, inode 1084272, ctime 1770086462.9754088)
    ceph/hosts-overview.json (size 27218, inode 1084267, ctime 1770086462.9743156)
    ceph/pool-overview.json (size 49139, inode 1084285, ctime 1770086462.9783156)
    ceph/host-details.json (size 44791, inode 1084257, ctime 1770086462.973147)
    ceph/radosgw-sync-overview.json (size 16156, inode 1084296, ctime 1770086462.98154)
    openstack/openstack.json (size 57270, inode 1084492, ctime 1770086463.0203161)
    infrastructure/haproxy.json (size 410814, inode 1084368, ctime 1770086462.9990218)
    infrastructure/database.json (size 30898, inode 1084335, ctime 1770086462.9898782)
    infrastructure/node-rsrc-use.json (size 15725, inode 1084416, ctime 1770086463.0013158)
    infrastructure/alertmanager-overview.json (size 9645, inode 1084319, ctime 1770086462.9875007)
    infrastructure/opensearch.json (size 65458, inode 1084461, ctime 1770086463.011934)
    infrastructure/node_exporter_full.json (size 682774, inode 1084420, ctime 1770086463.008864) [testbed-node-0; testbed-node-1 record continues below]
2026-02-03 04:44:13.520112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1084420,
'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.008864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:17.699425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1084420, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.008864, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:17.699533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1084465, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0125277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:17.699625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1084465, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0125277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:17.699704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1084465, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0125277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:17.699721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1084486, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0174868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:17.699733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1084486, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0174868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:17.699765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1084486, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0174868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:17.699778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1084456, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.010316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:17.699790 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1084456, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.010316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:17.699815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1084456, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.010316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:17.699826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1084408, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.000316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:17.699838 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1084408, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.000316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:17.699859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1084408, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.000316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:21.388400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1084355, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.9943156, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2026-02-03 04:44:21.388517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1084355, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.9943156, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:21.388660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1084355, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.9943156, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:21.388680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1084401, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.9999633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:21.388692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1084401, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.9999633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:21.388704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1084401, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.9999633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:21.388736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1084341, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.9924345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:21.388750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1084341, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.9924345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:21.388772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1084341, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.9924345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:21.388789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1084411, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1770086463.0013158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:21.388802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1084411, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0013158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:21.388814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1084411, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0013158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:21.388834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
222049, 'inode': 1084478, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0169952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:25.589429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1084478, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0169952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:25.589651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1084478, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0169952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:25.589690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1084472, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0148861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:25.589704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1084472, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0148861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:25.589716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1084472, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0148861, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:25.589727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1084322, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.988041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:25.589759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1084322, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.988041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:25.589783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1084322, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.988041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:25.589800 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1084328, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.9893157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:25.589812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1084328, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.9893157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:25.589825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1084328, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086462.9893157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:25.589837 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1084452, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.009316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:44:25.589859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1084452, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.009316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-03 04:46:08.386598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1084452, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.009316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-03 04:46:08.386792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1084470, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0127225, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-03 04:46:08.386810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1084470, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0127225, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-03 04:46:08.386815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1084470, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770086463.0127225, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-03 04:46:08.386819 | orchestrator |
2026-02-03 04:46:08.386825 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2026-02-03 04:46:08.386831 | orchestrator | Tuesday 03 February 2026 04:44:26 +0000 (0:00:37.961) 0:00:53.350 ******
2026-02-03 04:46:08.386835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-03 04:46:08.386868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-03 04:46:08.386873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-03 04:46:08.386877 | orchestrator |
2026-02-03 04:46:08.386881 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-02-03 04:46:08.386885 | orchestrator | Tuesday 03 February 2026 04:44:27 +0000 (0:00:01.061) 0:00:54.411 ******
2026-02-03 04:46:08.386889 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:46:08.386893 | orchestrator |
2026-02-03 04:46:08.386897 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2026-02-03 04:46:08.386904 | orchestrator | Tuesday 03 February 2026 04:44:30 +0000 (0:00:02.255) 0:00:56.667 ******
2026-02-03 04:46:08.386908 | orchestrator | changed: [testbed-node-0]
2026-02-03 04:46:08.386912 | orchestrator |
2026-02-03 04:46:08.386916 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-02-03 04:46:08.386920 | orchestrator | Tuesday 03 February 2026 04:44:32 +0000 (0:00:02.237) 0:00:58.905 ******
2026-02-03 04:46:08.386923 | orchestrator |
2026-02-03 04:46:08.386927 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-02-03 04:46:08.386931 | orchestrator | Tuesday 03 February 2026 04:44:32 +0000 (0:00:00.072) 0:00:58.977 ******
2026-02-03 04:46:08.386935 | orchestrator |
2026-02-03 04:46:08.386938
| orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-03 04:46:08.386942 | orchestrator | Tuesday 03 February 2026 04:44:32 +0000 (0:00:00.073) 0:00:59.051 ****** 2026-02-03 04:46:08.386946 | orchestrator | 2026-02-03 04:46:08.386950 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-02-03 04:46:08.386954 | orchestrator | Tuesday 03 February 2026 04:44:32 +0000 (0:00:00.075) 0:00:59.126 ****** 2026-02-03 04:46:08.386958 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:46:08.386962 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:46:08.386965 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:46:08.386969 | orchestrator | 2026-02-03 04:46:08.386973 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-02-03 04:46:08.386977 | orchestrator | Tuesday 03 February 2026 04:44:39 +0000 (0:00:07.302) 0:01:06.429 ****** 2026-02-03 04:46:08.386981 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:46:08.386985 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:46:08.386988 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-02-03 04:46:08.386998 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-02-03 04:46:08.387001 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-02-03 04:46:08.387005 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 
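The "Waiting for grafana to start on first node" handler above polls the service and counts down retries ("12 retries left", "11 retries left", …) until the check passes. A minimal shell sketch of that retry-until-healthy pattern, with the function name, retry count, and delay as illustrative assumptions rather than values taken from the playbook:

```shell
# Generic poll helper: run a check command until it succeeds or the
# retries are exhausted, echoing a countdown like the Ansible handler does.
# Name and defaults are assumptions for illustration only.
wait_for() {
    local retries=$1 delay=$2
    shift 2
    local attempt
    for ((attempt = 1; attempt <= retries; attempt++)); do
        if "$@"; then
            echo "ok after ${attempt} attempt(s)"
            return 0
        fi
        echo "FAILED - RETRYING ($((retries - attempt)) retries left)" >&2
        sleep "$delay"
    done
    return 1
}

# Example check analogous to the Grafana wait (endpoint assumed):
# wait_for 12 5 curl --fail --silent --output /dev/null http://192.168.16.10:3000
```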
2026-02-03 04:46:08.387009 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:46:08.387014 | orchestrator | 2026-02-03 04:46:08.387018 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-02-03 04:46:08.387022 | orchestrator | Tuesday 03 February 2026 04:45:30 +0000 (0:00:50.699) 0:01:57.129 ****** 2026-02-03 04:46:08.387026 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:46:08.387029 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:46:08.387033 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:46:08.387037 | orchestrator | 2026-02-03 04:46:08.387041 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-02-03 04:46:08.387044 | orchestrator | Tuesday 03 February 2026 04:46:03 +0000 (0:00:32.462) 0:02:29.591 ****** 2026-02-03 04:46:08.387049 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:46:08.387055 | orchestrator | 2026-02-03 04:46:08.387061 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-02-03 04:46:08.387067 | orchestrator | Tuesday 03 February 2026 04:46:05 +0000 (0:00:02.177) 0:02:31.769 ****** 2026-02-03 04:46:08.387073 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:46:08.387078 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:46:08.387084 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:46:08.387089 | orchestrator | 2026-02-03 04:46:08.387095 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-02-03 04:46:08.387100 | orchestrator | Tuesday 03 February 2026 04:46:05 +0000 (0:00:00.342) 0:02:32.111 ****** 2026-02-03 04:46:08.387107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2026-02-03 04:46:08.387120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-02-03 04:46:09.083872 | orchestrator | 2026-02-03 04:46:09.083976 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-02-03 04:46:09.083993 | orchestrator | Tuesday 03 February 2026 04:46:08 +0000 (0:00:02.750) 0:02:34.861 ****** 2026-02-03 04:46:09.084004 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:46:09.084016 | orchestrator | 2026-02-03 04:46:09.084027 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 04:46:09.084039 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-03 04:46:09.084051 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-03 04:46:09.084062 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-03 04:46:09.084072 | orchestrator | 2026-02-03 04:46:09.084082 | orchestrator | 2026-02-03 04:46:09.084093 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 04:46:09.084103 | orchestrator | Tuesday 03 February 2026 04:46:08 +0000 (0:00:00.310) 0:02:35.172 ****** 2026-02-03 04:46:09.084133 | orchestrator | =============================================================================== 2026-02-03 04:46:09.084144 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 50.70s 2026-02-03 04:46:09.084177 | orchestrator | grafana : Copying over custom 
dashboards ------------------------------- 37.96s 2026-02-03 04:46:09.084189 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 32.46s 2026-02-03 04:46:09.084199 | orchestrator | grafana : Restart first grafana container ------------------------------- 7.30s 2026-02-03 04:46:09.084210 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.75s 2026-02-03 04:46:09.084220 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.26s 2026-02-03 04:46:09.084231 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.24s 2026-02-03 04:46:09.084241 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.18s 2026-02-03 04:46:09.084251 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.82s 2026-02-03 04:46:09.084261 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.69s 2026-02-03 04:46:09.084271 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.39s 2026-02-03 04:46:09.084283 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.34s 2026-02-03 04:46:09.084293 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.30s 2026-02-03 04:46:09.084304 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.06s 2026-02-03 04:46:09.084314 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.01s 2026-02-03 04:46:09.084324 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.88s 2026-02-03 04:46:09.084335 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.79s 2026-02-03 04:46:09.084346 | orchestrator | grafana : Find templated grafana dashboards 
----------------------------- 0.74s 2026-02-03 04:46:09.084356 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.71s 2026-02-03 04:46:09.084367 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.64s 2026-02-03 04:46:09.459171 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh 2026-02-03 04:46:09.465054 | orchestrator | + set -e 2026-02-03 04:46:09.465106 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-03 04:46:09.465120 | orchestrator | ++ export INTERACTIVE=false 2026-02-03 04:46:09.465133 | orchestrator | ++ INTERACTIVE=false 2026-02-03 04:46:09.465144 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-03 04:46:09.465155 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-03 04:46:09.465174 | orchestrator | + source /opt/manager-vars.sh 2026-02-03 04:46:09.466916 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-03 04:46:09.466944 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-03 04:46:09.466956 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-03 04:46:09.466968 | orchestrator | ++ CEPH_VERSION=reef 2026-02-03 04:46:09.466979 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-03 04:46:09.466991 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-03 04:46:09.467003 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-03 04:46:09.467016 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-03 04:46:09.467028 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-03 04:46:09.467041 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-03 04:46:09.467053 | orchestrator | ++ export ARA=false 2026-02-03 04:46:09.467065 | orchestrator | ++ ARA=false 2026-02-03 04:46:09.467077 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-03 04:46:09.467088 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-03 04:46:09.467099 | orchestrator | ++ export TEMPEST=false 2026-02-03 04:46:09.467110 | orchestrator | ++ TEMPEST=false 
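The trace above shows include.sh calling `semver 9.5.0 8.0.0` and then testing `[[ 1 -ge 0 ]]`, i.e. the helper appears to print 1 when the first version is newer. A sketch of such a comparison built on `sort -V`; this reimplementation is an assumption about the helper's behaviour, not its actual source:

```shell
# Hypothetical semver comparison: print 1 if $1 > $2, 0 if equal, -1 if older.
# Relies on GNU sort's version ordering (-V).
semver_cmp() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]; then
        echo 1     # first argument is the newer version
    else
        echo -1    # first argument is the older version
    fi
}
```

With this, the guard in the script reads as "manager version 9.5.0 is at least 8.0.0, so proceed with the cluster-api deploy".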
2026-02-03 04:46:09.467209 | orchestrator | ++ export IS_ZUUL=true 2026-02-03 04:46:09.467222 | orchestrator | ++ IS_ZUUL=true 2026-02-03 04:46:09.467233 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-02-03 04:46:09.467245 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-02-03 04:46:09.467256 | orchestrator | ++ export EXTERNAL_API=false 2026-02-03 04:46:09.467271 | orchestrator | ++ EXTERNAL_API=false 2026-02-03 04:46:09.467290 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-03 04:46:09.467309 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-03 04:46:09.467328 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-03 04:46:09.467347 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-03 04:46:09.467390 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-03 04:46:09.467402 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-03 04:46:09.468112 | orchestrator | ++ semver 9.5.0 8.0.0 2026-02-03 04:46:09.533679 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-03 04:46:09.533833 | orchestrator | + osism apply clusterapi 2026-02-03 04:46:11.772155 | orchestrator | 2026-02-03 04:46:11 | INFO  | Task 718e8502-86ff-4eac-b96b-2a7b3a1f9cb9 (clusterapi) was prepared for execution. 2026-02-03 04:46:11.772259 | orchestrator | 2026-02-03 04:46:11 | INFO  | It takes a moment until task 718e8502-86ff-4eac-b96b-2a7b3a1f9cb9 (clusterapi) has been started and output is visible here. 
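include.sh exports `OSISM_APPLY_RETRY=1` before the `osism apply clusterapi` call above. A hedged sketch of what a retry wrapper honoring such a variable could look like; the wrapper name and exact retry semantics are assumptions:

```shell
# Run a command up to N attempts, returning 0 on the first success and the
# last non-zero exit code otherwise. Illustrative only.
apply_with_retry() {
    local attempts=$1
    shift
    local rc=1 i
    for ((i = 1; i <= attempts; i++)); do
        "$@" && return 0
        rc=$?
        echo "attempt ${i}/${attempts} failed (rc=${rc})" >&2
    done
    return "$rc"
}

# Usage sketch: apply_with_retry "${OSISM_APPLY_RETRY:-1}" osism apply clusterapi
```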
2026-02-03 04:47:06.386690 | orchestrator | 2026-02-03 04:47:06.386844 | orchestrator | PLAY [Apply cert_manager role] ************************************************* 2026-02-03 04:47:06.386853 | orchestrator | 2026-02-03 04:47:06.386859 | orchestrator | TASK [Include cert_manager role] *********************************************** 2026-02-03 04:47:06.386865 | orchestrator | Tuesday 03 February 2026 04:46:16 +0000 (0:00:00.202) 0:00:00.202 ****** 2026-02-03 04:47:06.386874 | orchestrator | included: cert_manager for testbed-manager 2026-02-03 04:47:06.386881 | orchestrator | 2026-02-03 04:47:06.386888 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] ********************************* 2026-02-03 04:47:06.386896 | orchestrator | Tuesday 03 February 2026 04:46:16 +0000 (0:00:00.259) 0:00:00.462 ****** 2026-02-03 04:47:06.386904 | orchestrator | changed: [testbed-manager] 2026-02-03 04:47:06.386913 | orchestrator | 2026-02-03 04:47:06.386920 | orchestrator | TASK [cert_manager : Deploy cert-manager] ************************************** 2026-02-03 04:47:06.386927 | orchestrator | Tuesday 03 February 2026 04:46:22 +0000 (0:00:05.413) 0:00:05.875 ****** 2026-02-03 04:47:06.386932 | orchestrator | changed: [testbed-manager] 2026-02-03 04:47:06.386937 | orchestrator | 2026-02-03 04:47:06.386942 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] *********************** 2026-02-03 04:47:06.386946 | orchestrator | 2026-02-03 04:47:06.386951 | orchestrator | TASK [Get capi-system namespace phase] ***************************************** 2026-02-03 04:47:06.386956 | orchestrator | Tuesday 03 February 2026 04:46:46 +0000 (0:00:23.975) 0:00:29.851 ****** 2026-02-03 04:47:06.386961 | orchestrator | ok: [testbed-manager] 2026-02-03 04:47:06.386966 | orchestrator | 2026-02-03 04:47:06.386987 | orchestrator | TASK [Set capi-system-phase fact] ********************************************** 2026-02-03 04:47:06.386992 | orchestrator | Tuesday 
03 February 2026 04:46:47 +0000 (0:00:01.185) 0:00:31.037 ****** 2026-02-03 04:47:06.386997 | orchestrator | ok: [testbed-manager] 2026-02-03 04:47:06.387001 | orchestrator | 2026-02-03 04:47:06.387005 | orchestrator | TASK [Initialize the CAPI management cluster] ********************************** 2026-02-03 04:47:06.387010 | orchestrator | Tuesday 03 February 2026 04:46:47 +0000 (0:00:00.156) 0:00:31.193 ****** 2026-02-03 04:47:06.387015 | orchestrator | ok: [testbed-manager] 2026-02-03 04:47:06.387019 | orchestrator | 2026-02-03 04:47:06.387024 | orchestrator | TASK [Upgrade the CAPI management cluster] ************************************* 2026-02-03 04:47:06.387028 | orchestrator | Tuesday 03 February 2026 04:47:03 +0000 (0:00:15.795) 0:00:46.989 ****** 2026-02-03 04:47:06.387032 | orchestrator | skipping: [testbed-manager] 2026-02-03 04:47:06.387037 | orchestrator | 2026-02-03 04:47:06.387041 | orchestrator | TASK [Install openstack-resource-controller] *********************************** 2026-02-03 04:47:06.387046 | orchestrator | Tuesday 03 February 2026 04:47:03 +0000 (0:00:00.142) 0:00:47.131 ****** 2026-02-03 04:47:06.387050 | orchestrator | changed: [testbed-manager] 2026-02-03 04:47:06.387055 | orchestrator | 2026-02-03 04:47:06.387059 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 04:47:06.387065 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-03 04:47:06.387071 | orchestrator | 2026-02-03 04:47:06.387075 | orchestrator | 2026-02-03 04:47:06.387079 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 04:47:06.387084 | orchestrator | Tuesday 03 February 2026 04:47:05 +0000 (0:00:02.392) 0:00:49.523 ****** 2026-02-03 04:47:06.387088 | orchestrator | =============================================================================== 2026-02-03 04:47:06.387114 | orchestrator | 
cert_manager : Deploy cert-manager ------------------------------------- 23.98s 2026-02-03 04:47:06.387119 | orchestrator | Initialize the CAPI management cluster --------------------------------- 15.80s 2026-02-03 04:47:06.387123 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 5.41s 2026-02-03 04:47:06.387128 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.39s 2026-02-03 04:47:06.387132 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.19s 2026-02-03 04:47:06.387136 | orchestrator | Include cert_manager role ----------------------------------------------- 0.26s 2026-02-03 04:47:06.387140 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.16s 2026-02-03 04:47:06.387145 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.14s 2026-02-03 04:47:06.806164 | orchestrator | + osism apply magnum 2026-02-03 04:47:09.189590 | orchestrator | 2026-02-03 04:47:09 | INFO  | Task 0ae93363-861e-4533-8214-994f69736b1c (magnum) was prepared for execution. 2026-02-03 04:47:09.189662 | orchestrator | 2026-02-03 04:47:09 | INFO  | It takes a moment until task 0ae93363-861e-4533-8214-994f69736b1c (magnum) has been started and output is visible here. 
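The deploy script runs under `set -e` and moves on to `osism apply magnum` only after the previous play finishes cleanly. For a wrapper that inspects the PLAY RECAP itself, a small awk filter could gate on the `failed=`/`unreachable=` counters; this helper is illustrative and not part of the job:

```shell
# Read a PLAY RECAP on stdin; exit non-zero if any host reports
# failed or unreachable tasks. Hypothetical helper for illustration.
recap_ok() {
    awk '
        /unreachable=/ {
            for (i = 1; i <= NF; i++) {
                if ($i ~ /^(failed|unreachable)=/) {
                    split($i, kv, "=")
                    if (kv[2] + 0 > 0) bad = 1
                }
            }
        }
        END { exit bad ? 1 : 0 }
    '
}
```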
2026-02-03 04:47:52.662709 | orchestrator | 2026-02-03 04:47:52.662885 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-03 04:47:52.662907 | orchestrator | 2026-02-03 04:47:52.662921 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-03 04:47:52.662935 | orchestrator | Tuesday 03 February 2026 04:47:13 +0000 (0:00:00.296) 0:00:00.296 ****** 2026-02-03 04:47:52.662946 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:47:52.662959 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:47:52.662970 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:47:52.662982 | orchestrator | 2026-02-03 04:47:52.662993 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-03 04:47:52.663004 | orchestrator | Tuesday 03 February 2026 04:47:14 +0000 (0:00:00.332) 0:00:00.629 ****** 2026-02-03 04:47:52.663016 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-02-03 04:47:52.663027 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-02-03 04:47:52.663038 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-02-03 04:47:52.663050 | orchestrator | 2026-02-03 04:47:52.663061 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-02-03 04:47:52.663072 | orchestrator | 2026-02-03 04:47:52.663083 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-03 04:47:52.663094 | orchestrator | Tuesday 03 February 2026 04:47:14 +0000 (0:00:00.498) 0:00:01.127 ****** 2026-02-03 04:47:52.663106 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 04:47:52.663118 | orchestrator | 2026-02-03 04:47:52.663129 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-02-03 
04:47:52.663141 | orchestrator | Tuesday 03 February 2026 04:47:15 +0000 (0:00:00.663) 0:00:01.790 ****** 2026-02-03 04:47:52.663152 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-02-03 04:47:52.663163 | orchestrator | 2026-02-03 04:47:52.663174 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-02-03 04:47:52.663185 | orchestrator | Tuesday 03 February 2026 04:47:18 +0000 (0:00:03.550) 0:00:05.341 ****** 2026-02-03 04:47:52.663197 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-02-03 04:47:52.663208 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-02-03 04:47:52.663220 | orchestrator | 2026-02-03 04:47:52.663231 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-02-03 04:47:52.663242 | orchestrator | Tuesday 03 February 2026 04:47:25 +0000 (0:00:06.422) 0:00:11.764 ****** 2026-02-03 04:47:52.663280 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-03 04:47:52.663293 | orchestrator | 2026-02-03 04:47:52.663322 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-02-03 04:47:52.663335 | orchestrator | Tuesday 03 February 2026 04:47:28 +0000 (0:00:03.540) 0:00:15.304 ****** 2026-02-03 04:47:52.663348 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-03 04:47:52.663362 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-02-03 04:47:52.663375 | orchestrator | 2026-02-03 04:47:52.663388 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-02-03 04:47:52.663401 | orchestrator | Tuesday 03 February 2026 04:47:32 +0000 (0:00:03.885) 0:00:19.190 ****** 2026-02-03 04:47:52.663414 | orchestrator | ok: [testbed-node-0] => (item=admin) 
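The endpoint-creation task above registers the same service path under two FQDNs: `https://api-int.testbed.osism.xyz:9511/v1` (internal) and `https://api.testbed.osism.xyz:9511/v1` (public). The composition is just FQDN + port + versioned path, as in this sketch (the helper itself is hypothetical; the values come from the log):

```shell
# Compose an HTTPS endpoint URL from FQDN, service port, and API path.
endpoint_url() {
    local fqdn=$1 port=$2 path=$3
    printf 'https://%s:%s%s\n' "$fqdn" "$port" "$path"
}

# endpoint_url api-int.testbed.osism.xyz 9511 /v1   # internal magnum endpoint
# endpoint_url api.testbed.osism.xyz 9511 /v1       # public magnum endpoint
```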
2026-02-03 04:47:52.663427 | orchestrator | 2026-02-03 04:47:52.663440 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-02-03 04:47:52.663453 | orchestrator | Tuesday 03 February 2026 04:47:36 +0000 (0:00:03.314) 0:00:22.504 ****** 2026-02-03 04:47:52.663466 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-02-03 04:47:52.663479 | orchestrator | 2026-02-03 04:47:52.663492 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-02-03 04:47:52.663505 | orchestrator | Tuesday 03 February 2026 04:47:39 +0000 (0:00:03.904) 0:00:26.409 ****** 2026-02-03 04:47:52.663518 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:47:52.663531 | orchestrator | 2026-02-03 04:47:52.663566 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-02-03 04:47:52.663580 | orchestrator | Tuesday 03 February 2026 04:47:43 +0000 (0:00:03.406) 0:00:29.816 ****** 2026-02-03 04:47:52.663593 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:47:52.663606 | orchestrator | 2026-02-03 04:47:52.663619 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-02-03 04:47:52.663630 | orchestrator | Tuesday 03 February 2026 04:47:47 +0000 (0:00:04.015) 0:00:33.831 ****** 2026-02-03 04:47:52.663641 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:47:52.663652 | orchestrator | 2026-02-03 04:47:52.663663 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-02-03 04:47:52.663675 | orchestrator | Tuesday 03 February 2026 04:47:50 +0000 (0:00:03.560) 0:00:37.392 ****** 2026-02-03 04:47:52.663710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-03 04:47:52.663727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-03 04:47:52.663755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-03 04:47:52.663769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-03 04:47:52.663781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-03 04:47:52.663802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-03 04:48:00.400571 | orchestrator | 2026-02-03 04:48:00.400685 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-02-03 04:48:00.400706 | orchestrator | Tuesday 03 February 2026 04:47:52 +0000 (0:00:01.658) 0:00:39.050 ****** 2026-02-03 04:48:00.400718 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:48:00.400730 | orchestrator | 2026-02-03 04:48:00.400742 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-02-03 04:48:00.400753 | orchestrator | Tuesday 03 February 2026 04:47:52 +0000 (0:00:00.191) 0:00:39.242 ****** 2026-02-03 04:48:00.400790 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:48:00.400802 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:48:00.400813 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:48:00.400871 | orchestrator | 2026-02-03 04:48:00.400884 | orchestrator | 
TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-02-03 04:48:00.400894 | orchestrator | Tuesday 03 February 2026 04:47:53 +0000 (0:00:00.320) 0:00:39.563 ****** 2026-02-03 04:48:00.400904 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-03 04:48:00.400914 | orchestrator | 2026-02-03 04:48:00.400925 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-02-03 04:48:00.400936 | orchestrator | Tuesday 03 February 2026 04:47:54 +0000 (0:00:00.902) 0:00:40.466 ****** 2026-02-03 04:48:00.400965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-03 04:48:00.400980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-03 04:48:00.400992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-03 04:48:00.401025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-03 04:48:00.401051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-03 04:48:00.401068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-03 04:48:00.401080 | orchestrator | 2026-02-03 04:48:00.401090 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-02-03 04:48:00.401101 
| orchestrator | Tuesday 03 February 2026 04:47:56 +0000 (0:00:02.489) 0:00:42.956 ****** 2026-02-03 04:48:00.401112 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:48:00.401125 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:48:00.401136 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:48:00.401147 | orchestrator | 2026-02-03 04:48:00.401159 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-03 04:48:00.401171 | orchestrator | Tuesday 03 February 2026 04:47:57 +0000 (0:00:00.570) 0:00:43.526 ****** 2026-02-03 04:48:00.401184 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 04:48:00.401198 | orchestrator | 2026-02-03 04:48:00.401208 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-02-03 04:48:00.401218 | orchestrator | Tuesday 03 February 2026 04:47:57 +0000 (0:00:00.595) 0:00:44.122 ****** 2026-02-03 04:48:00.401230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-03 04:48:00.401253 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-03 04:48:01.396434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-03 04:48:01.396541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-03 04:48:01.396555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-03 04:48:01.396566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-03 04:48:01.396576 | orchestrator | 2026-02-03 04:48:01.396588 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-02-03 04:48:01.396629 | orchestrator | Tuesday 03 February 2026 04:48:00 +0000 (0:00:02.682) 0:00:46.804 ****** 2026-02-03 04:48:01.396655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-03 04:48:01.396665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-03 04:48:01.396675 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:48:01.396689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-03 04:48:01.396699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-03 04:48:01.396708 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:48:01.396717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-03 04:48:01.396744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-03 04:48:05.149323 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:48:05.149428 | orchestrator | 2026-02-03 
04:48:05.149460 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-02-03 04:48:05.149473 | orchestrator | Tuesday 03 February 2026 04:48:01 +0000 (0:00:00.986) 0:00:47.791 ****** 2026-02-03 04:48:05.149488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-03 04:48:05.149521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-03 04:48:05.149534 | 
orchestrator | skipping: [testbed-node-0] 2026-02-03 04:48:05.149546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-03 04:48:05.149582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-03 04:48:05.149594 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:48:05.149625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-03 04:48:05.149638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-03 04:48:05.149649 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:48:05.149660 | orchestrator | 2026-02-03 04:48:05.149677 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-02-03 04:48:05.149689 | orchestrator | Tuesday 03 February 2026 04:48:02 +0000 (0:00:00.943) 0:00:48.734 ****** 2026-02-03 04:48:05.149702 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-03 04:48:05.149722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-03 04:48:05.149742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-03 04:48:11.568474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-03 04:48:11.568565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-03 04:48:11.568572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-03 04:48:11.568593 | orchestrator | 2026-02-03 04:48:11.568601 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-02-03 04:48:11.568607 | orchestrator | Tuesday 03 February 2026 04:48:05 +0000 (0:00:02.819) 0:00:51.554 ****** 2026-02-03 04:48:11.568612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-03 04:48:11.568629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-03 04:48:11.568635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-03 04:48:11.568642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-03 04:48:11.568651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-03 04:48:11.568655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-03 04:48:11.568660 | orchestrator | 2026-02-03 04:48:11.568665 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-02-03 04:48:11.568669 | orchestrator | Tuesday 03 February 2026 04:48:10 +0000 (0:00:05.681) 0:00:57.236 ****** 2026-02-03 04:48:11.568679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-03 04:48:13.585187 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-03 04:48:13.585307 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:48:13.585344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-03 04:48:13.585384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-03 04:48:13.585396 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:48:13.585408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-03 04:48:13.585448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-03 04:48:13.585461 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:48:13.585473 | orchestrator | 2026-02-03 04:48:13.585486 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-02-03 04:48:13.585499 | orchestrator | Tuesday 03 February 2026 04:48:11 +0000 (0:00:00.738) 0:00:57.974 ****** 2026-02-03 04:48:13.585516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-03 04:48:13.585537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-03 04:48:13.585549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-03 04:48:13.585561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-03 04:48:13.585581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-03 04:49:07.910885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2026-02-03 04:49:07.911087 | orchestrator | 2026-02-03 04:49:07.911106 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-03 04:49:07.911120 | orchestrator | Tuesday 03 February 2026 04:48:13 +0000 (0:00:02.006) 0:00:59.981 ****** 2026-02-03 04:49:07.911131 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:49:07.911143 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:49:07.911154 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:49:07.911165 | orchestrator | 2026-02-03 04:49:07.911177 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-02-03 04:49:07.911189 | orchestrator | Tuesday 03 February 2026 04:48:14 +0000 (0:00:00.566) 0:01:00.547 ****** 2026-02-03 04:49:07.911200 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:49:07.911211 | orchestrator | 2026-02-03 04:49:07.911222 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-02-03 04:49:07.911233 | orchestrator | Tuesday 03 February 2026 04:48:16 +0000 (0:00:02.187) 0:01:02.735 ****** 2026-02-03 04:49:07.911244 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:49:07.911255 | orchestrator | 2026-02-03 04:49:07.911266 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-02-03 04:49:07.911277 | orchestrator | Tuesday 03 February 2026 04:48:18 +0000 (0:00:02.320) 0:01:05.055 ****** 2026-02-03 04:49:07.911288 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:49:07.911299 | orchestrator | 2026-02-03 04:49:07.911310 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-03 04:49:07.911321 | orchestrator | Tuesday 03 February 2026 04:48:35 +0000 (0:00:17.168) 0:01:22.224 ****** 2026-02-03 04:49:07.911332 | orchestrator | 2026-02-03 04:49:07.911343 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2026-02-03 04:49:07.911353 | orchestrator | Tuesday 03 February 2026 04:48:35 +0000 (0:00:00.077) 0:01:22.301 ****** 2026-02-03 04:49:07.911364 | orchestrator | 2026-02-03 04:49:07.911375 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-03 04:49:07.911386 | orchestrator | Tuesday 03 February 2026 04:48:35 +0000 (0:00:00.075) 0:01:22.377 ****** 2026-02-03 04:49:07.911397 | orchestrator | 2026-02-03 04:49:07.911410 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-02-03 04:49:07.911423 | orchestrator | Tuesday 03 February 2026 04:48:36 +0000 (0:00:00.085) 0:01:22.463 ****** 2026-02-03 04:49:07.911437 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:49:07.911449 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:49:07.911462 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:49:07.911475 | orchestrator | 2026-02-03 04:49:07.911488 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-02-03 04:49:07.911501 | orchestrator | Tuesday 03 February 2026 04:48:56 +0000 (0:00:20.275) 0:01:42.739 ****** 2026-02-03 04:49:07.911514 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:49:07.911527 | orchestrator | changed: [testbed-node-1] 2026-02-03 04:49:07.911540 | orchestrator | changed: [testbed-node-2] 2026-02-03 04:49:07.911552 | orchestrator | 2026-02-03 04:49:07.911566 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 04:49:07.911580 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-03 04:49:07.911594 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-03 04:49:07.911608 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-02-03 04:49:07.911628 | orchestrator | 2026-02-03 04:49:07.911642 | orchestrator | 2026-02-03 04:49:07.911656 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 04:49:07.911669 | orchestrator | Tuesday 03 February 2026 04:49:07 +0000 (0:00:11.144) 0:01:53.884 ****** 2026-02-03 04:49:07.911682 | orchestrator | =============================================================================== 2026-02-03 04:49:07.911694 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 20.28s 2026-02-03 04:49:07.911707 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.17s 2026-02-03 04:49:07.911721 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.15s 2026-02-03 04:49:07.911734 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.42s 2026-02-03 04:49:07.911748 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.68s 2026-02-03 04:49:07.911761 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.02s 2026-02-03 04:49:07.911774 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.90s 2026-02-03 04:49:07.911802 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.89s 2026-02-03 04:49:07.911814 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.56s 2026-02-03 04:49:07.911825 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.55s 2026-02-03 04:49:07.911836 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.54s 2026-02-03 04:49:07.911847 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.41s 2026-02-03 04:49:07.911858 | 
orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.31s 2026-02-03 04:49:07.911875 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.82s 2026-02-03 04:49:07.911887 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.68s 2026-02-03 04:49:07.911898 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.49s 2026-02-03 04:49:07.911927 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.32s 2026-02-03 04:49:07.911938 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.19s 2026-02-03 04:49:07.911949 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.01s 2026-02-03 04:49:07.911960 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.66s 2026-02-03 04:49:08.630708 | orchestrator | ok: Runtime: 1:44:35.543872 2026-02-03 04:49:08.858417 | 2026-02-03 04:49:08.858552 | TASK [Deploy in a nutshell] 2026-02-03 04:49:09.391994 | orchestrator | skipping: Conditional result was False 2026-02-03 04:49:09.419632 | 2026-02-03 04:49:09.419773 | TASK [Bootstrap services] 2026-02-03 04:49:10.138347 | orchestrator | 2026-02-03 04:49:10.138545 | orchestrator | # BOOTSTRAP 2026-02-03 04:49:10.138571 | orchestrator | 2026-02-03 04:49:10.138586 | orchestrator | + set -e 2026-02-03 04:49:10.138599 | orchestrator | + echo 2026-02-03 04:49:10.138613 | orchestrator | + echo '# BOOTSTRAP' 2026-02-03 04:49:10.138631 | orchestrator | + echo 2026-02-03 04:49:10.138675 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-02-03 04:49:10.148456 | orchestrator | + set -e 2026-02-03 04:49:10.148554 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-02-03 04:49:12.504852 | orchestrator | 2026-02-03 04:49:12 | INFO  | It takes a 
moment until task b99d57fe-2ec5-4a6b-9fe7-9157f5cea437 (flavor-manager) has been started and output is visible here. 2026-02-03 04:49:20.726853 | orchestrator | 2026-02-03 04:49:16 | INFO  | Flavor SCS-1L-1 created 2026-02-03 04:49:20.727019 | orchestrator | 2026-02-03 04:49:16 | INFO  | Flavor SCS-1L-1-5 created 2026-02-03 04:49:20.727041 | orchestrator | 2026-02-03 04:49:16 | INFO  | Flavor SCS-1V-2 created 2026-02-03 04:49:20.727054 | orchestrator | 2026-02-03 04:49:16 | INFO  | Flavor SCS-1V-2-5 created 2026-02-03 04:49:20.727066 | orchestrator | 2026-02-03 04:49:16 | INFO  | Flavor SCS-1V-4 created 2026-02-03 04:49:20.727078 | orchestrator | 2026-02-03 04:49:17 | INFO  | Flavor SCS-1V-4-10 created 2026-02-03 04:49:20.727090 | orchestrator | 2026-02-03 04:49:17 | INFO  | Flavor SCS-1V-8 created 2026-02-03 04:49:20.727102 | orchestrator | 2026-02-03 04:49:17 | INFO  | Flavor SCS-1V-8-20 created 2026-02-03 04:49:20.727128 | orchestrator | 2026-02-03 04:49:17 | INFO  | Flavor SCS-2V-4 created 2026-02-03 04:49:20.727140 | orchestrator | 2026-02-03 04:49:17 | INFO  | Flavor SCS-2V-4-10 created 2026-02-03 04:49:20.727151 | orchestrator | 2026-02-03 04:49:17 | INFO  | Flavor SCS-2V-8 created 2026-02-03 04:49:20.727162 | orchestrator | 2026-02-03 04:49:17 | INFO  | Flavor SCS-2V-8-20 created 2026-02-03 04:49:20.727173 | orchestrator | 2026-02-03 04:49:18 | INFO  | Flavor SCS-2V-16 created 2026-02-03 04:49:20.727184 | orchestrator | 2026-02-03 04:49:18 | INFO  | Flavor SCS-2V-16-50 created 2026-02-03 04:49:20.727195 | orchestrator | 2026-02-03 04:49:18 | INFO  | Flavor SCS-4V-8 created 2026-02-03 04:49:20.727206 | orchestrator | 2026-02-03 04:49:18 | INFO  | Flavor SCS-4V-8-20 created 2026-02-03 04:49:20.727217 | orchestrator | 2026-02-03 04:49:18 | INFO  | Flavor SCS-4V-16 created 2026-02-03 04:49:20.727228 | orchestrator | 2026-02-03 04:49:18 | INFO  | Flavor SCS-4V-16-50 created 2026-02-03 04:49:20.727239 | orchestrator | 2026-02-03 04:49:18 | INFO  | Flavor 
SCS-4V-32 created 2026-02-03 04:49:20.727250 | orchestrator | 2026-02-03 04:49:19 | INFO  | Flavor SCS-4V-32-100 created 2026-02-03 04:49:20.727261 | orchestrator | 2026-02-03 04:49:19 | INFO  | Flavor SCS-8V-16 created 2026-02-03 04:49:20.727272 | orchestrator | 2026-02-03 04:49:19 | INFO  | Flavor SCS-8V-16-50 created 2026-02-03 04:49:20.727284 | orchestrator | 2026-02-03 04:49:19 | INFO  | Flavor SCS-8V-32 created 2026-02-03 04:49:20.727294 | orchestrator | 2026-02-03 04:49:19 | INFO  | Flavor SCS-8V-32-100 created 2026-02-03 04:49:20.727305 | orchestrator | 2026-02-03 04:49:19 | INFO  | Flavor SCS-16V-32 created 2026-02-03 04:49:20.727317 | orchestrator | 2026-02-03 04:49:20 | INFO  | Flavor SCS-16V-32-100 created 2026-02-03 04:49:20.727327 | orchestrator | 2026-02-03 04:49:20 | INFO  | Flavor SCS-2V-4-20s created 2026-02-03 04:49:20.727338 | orchestrator | 2026-02-03 04:49:20 | INFO  | Flavor SCS-4V-8-50s created 2026-02-03 04:49:20.727349 | orchestrator | 2026-02-03 04:49:20 | INFO  | Flavor SCS-8V-32-100s created 2026-02-03 04:49:23.180642 | orchestrator | 2026-02-03 04:49:23 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-02-03 04:49:33.372476 | orchestrator | 2026-02-03 04:49:33 | INFO  | Task 704c9bf6-a883-47d0-8612-9d7189a8e5f9 (bootstrap-basic) was prepared for execution. 2026-02-03 04:49:33.372586 | orchestrator | 2026-02-03 04:49:33 | INFO  | It takes a moment until task 704c9bf6-a883-47d0-8612-9d7189a8e5f9 (bootstrap-basic) has been started and output is visible here. 
2026-02-03 04:50:20.339198 | orchestrator | 2026-02-03 04:50:20.339387 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-02-03 04:50:20.339446 | orchestrator | 2026-02-03 04:50:20.339460 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-03 04:50:20.339474 | orchestrator | Tuesday 03 February 2026 04:49:38 +0000 (0:00:00.072) 0:00:00.072 ****** 2026-02-03 04:50:20.339491 | orchestrator | ok: [localhost] 2026-02-03 04:50:20.339508 | orchestrator | 2026-02-03 04:50:20.339524 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-02-03 04:50:20.339542 | orchestrator | Tuesday 03 February 2026 04:49:40 +0000 (0:00:01.961) 0:00:02.033 ****** 2026-02-03 04:50:20.339559 | orchestrator | ok: [localhost] 2026-02-03 04:50:20.339575 | orchestrator | 2026-02-03 04:50:20.339591 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-02-03 04:50:20.339602 | orchestrator | Tuesday 03 February 2026 04:49:47 +0000 (0:00:07.390) 0:00:09.424 ****** 2026-02-03 04:50:20.339612 | orchestrator | changed: [localhost] 2026-02-03 04:50:20.339622 | orchestrator | 2026-02-03 04:50:20.339632 | orchestrator | TASK [Create public network] *************************************************** 2026-02-03 04:50:20.339642 | orchestrator | Tuesday 03 February 2026 04:49:54 +0000 (0:00:06.684) 0:00:16.109 ****** 2026-02-03 04:50:20.339653 | orchestrator | changed: [localhost] 2026-02-03 04:50:20.339663 | orchestrator | 2026-02-03 04:50:20.339673 | orchestrator | TASK [Set public network to default] ******************************************* 2026-02-03 04:50:20.339683 | orchestrator | Tuesday 03 February 2026 04:50:00 +0000 (0:00:05.775) 0:00:21.885 ****** 2026-02-03 04:50:20.339698 | orchestrator | changed: [localhost] 2026-02-03 04:50:20.339709 | orchestrator | 2026-02-03 04:50:20.339721 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-02-03 04:50:20.339733 | orchestrator | Tuesday 03 February 2026 04:50:07 +0000 (0:00:07.049) 0:00:28.935 ****** 2026-02-03 04:50:20.339744 | orchestrator | changed: [localhost] 2026-02-03 04:50:20.339756 | orchestrator | 2026-02-03 04:50:20.339767 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-02-03 04:50:20.339778 | orchestrator | Tuesday 03 February 2026 04:50:11 +0000 (0:00:04.584) 0:00:33.519 ****** 2026-02-03 04:50:20.339790 | orchestrator | changed: [localhost] 2026-02-03 04:50:20.339801 | orchestrator | 2026-02-03 04:50:20.339813 | orchestrator | TASK [Create manager role] ***************************************************** 2026-02-03 04:50:20.339835 | orchestrator | Tuesday 03 February 2026 04:50:16 +0000 (0:00:04.278) 0:00:37.798 ****** 2026-02-03 04:50:20.339847 | orchestrator | ok: [localhost] 2026-02-03 04:50:20.339858 | orchestrator | 2026-02-03 04:50:20.339870 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 04:50:20.339882 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-03 04:50:20.339895 | orchestrator | 2026-02-03 04:50:20.339906 | orchestrator | 2026-02-03 04:50:20.339917 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 04:50:20.339928 | orchestrator | Tuesday 03 February 2026 04:50:20 +0000 (0:00:03.744) 0:00:41.542 ****** 2026-02-03 04:50:20.339940 | orchestrator | =============================================================================== 2026-02-03 04:50:20.339951 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.39s 2026-02-03 04:50:20.339963 | orchestrator | Set public network to default ------------------------------------------- 7.05s 2026-02-03 04:50:20.339975 | 
orchestrator | Create volume type LUKS ------------------------------------------------- 6.68s 2026-02-03 04:50:20.340014 | orchestrator | Create public network --------------------------------------------------- 5.78s 2026-02-03 04:50:20.340050 | orchestrator | Create public subnet ---------------------------------------------------- 4.58s 2026-02-03 04:50:20.340060 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.28s 2026-02-03 04:50:20.340070 | orchestrator | Create manager role ----------------------------------------------------- 3.74s 2026-02-03 04:50:20.340080 | orchestrator | Gathering Facts --------------------------------------------------------- 1.96s 2026-02-03 04:50:23.246542 | orchestrator | 2026-02-03 04:50:23 | INFO  | It takes a moment until task 54d48111-e7b7-4ffa-bf2a-849439dee399 (image-manager) has been started and output is visible here. 2026-02-03 04:51:07.176888 | orchestrator | 2026-02-03 04:50:26 | INFO  | Processing image 'Cirros 0.6.2' 2026-02-03 04:51:07.177032 | orchestrator | 2026-02-03 04:50:26 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-02-03 04:51:07.177104 | orchestrator | 2026-02-03 04:50:26 | INFO  | Importing image Cirros 0.6.2 2026-02-03 04:51:07.177124 | orchestrator | 2026-02-03 04:50:26 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-02-03 04:51:07.177144 | orchestrator | 2026-02-03 04:50:28 | INFO  | Waiting for image to leave queued state... 2026-02-03 04:51:07.177158 | orchestrator | 2026-02-03 04:50:30 | INFO  | Waiting for import to complete... 
2026-02-03 04:51:07.177169 | orchestrator | 2026-02-03 04:50:40 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-02-03 04:51:07.177182 | orchestrator | 2026-02-03 04:50:41 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-02-03 04:51:07.177194 | orchestrator | 2026-02-03 04:50:41 | INFO  | Setting internal_version = 0.6.2 2026-02-03 04:51:07.177205 | orchestrator | 2026-02-03 04:50:41 | INFO  | Setting image_original_user = cirros 2026-02-03 04:51:07.177217 | orchestrator | 2026-02-03 04:50:41 | INFO  | Adding tag os:cirros 2026-02-03 04:51:07.177228 | orchestrator | 2026-02-03 04:50:41 | INFO  | Setting property architecture: x86_64 2026-02-03 04:51:07.177238 | orchestrator | 2026-02-03 04:50:41 | INFO  | Setting property hw_disk_bus: scsi 2026-02-03 04:51:07.177249 | orchestrator | 2026-02-03 04:50:42 | INFO  | Setting property hw_rng_model: virtio 2026-02-03 04:51:07.177260 | orchestrator | 2026-02-03 04:50:42 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-03 04:51:07.177271 | orchestrator | 2026-02-03 04:50:42 | INFO  | Setting property hw_watchdog_action: reset 2026-02-03 04:51:07.177282 | orchestrator | 2026-02-03 04:50:42 | INFO  | Setting property hypervisor_type: qemu 2026-02-03 04:51:07.177294 | orchestrator | 2026-02-03 04:50:42 | INFO  | Setting property os_distro: cirros 2026-02-03 04:51:07.177304 | orchestrator | 2026-02-03 04:50:43 | INFO  | Setting property os_purpose: minimal 2026-02-03 04:51:07.177315 | orchestrator | 2026-02-03 04:50:43 | INFO  | Setting property replace_frequency: never 2026-02-03 04:51:07.177327 | orchestrator | 2026-02-03 04:50:43 | INFO  | Setting property uuid_validity: none 2026-02-03 04:51:07.177337 | orchestrator | 2026-02-03 04:50:43 | INFO  | Setting property provided_until: none 2026-02-03 04:51:07.177348 | orchestrator | 2026-02-03 04:50:44 | INFO  | Setting property image_description: Cirros 2026-02-03 04:51:07.177361 | orchestrator | 2026-02-03 04:50:44 | INFO  | 
Setting property image_name: Cirros 2026-02-03 04:51:07.177374 | orchestrator | 2026-02-03 04:50:44 | INFO  | Setting property internal_version: 0.6.2 2026-02-03 04:51:07.177387 | orchestrator | 2026-02-03 04:50:45 | INFO  | Setting property image_original_user: cirros 2026-02-03 04:51:07.177427 | orchestrator | 2026-02-03 04:50:45 | INFO  | Setting property os_version: 0.6.2 2026-02-03 04:51:07.177451 | orchestrator | 2026-02-03 04:50:45 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-02-03 04:51:07.177465 | orchestrator | 2026-02-03 04:50:45 | INFO  | Setting property image_build_date: 2023-05-30 2026-02-03 04:51:07.177478 | orchestrator | 2026-02-03 04:50:46 | INFO  | Checking status of 'Cirros 0.6.2' 2026-02-03 04:51:07.177492 | orchestrator | 2026-02-03 04:50:46 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-02-03 04:51:07.177505 | orchestrator | 2026-02-03 04:50:46 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-02-03 04:51:07.177518 | orchestrator | 2026-02-03 04:50:46 | INFO  | Processing image 'Cirros 0.6.3' 2026-02-03 04:51:07.177537 | orchestrator | 2026-02-03 04:50:46 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-02-03 04:51:07.177549 | orchestrator | 2026-02-03 04:50:46 | INFO  | Importing image Cirros 0.6.3 2026-02-03 04:51:07.177562 | orchestrator | 2026-02-03 04:50:46 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-02-03 04:51:07.177575 | orchestrator | 2026-02-03 04:50:47 | INFO  | Waiting for image to leave queued state... 2026-02-03 04:51:07.177588 | orchestrator | 2026-02-03 04:50:49 | INFO  | Waiting for import to complete... 
2026-02-03 04:51:07.177629 | orchestrator | 2026-02-03 04:51:00 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-02-03 04:51:07.177650 | orchestrator | 2026-02-03 04:51:00 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-02-03 04:51:07.177668 | orchestrator | 2026-02-03 04:51:00 | INFO  | Setting internal_version = 0.6.3
2026-02-03 04:51:07.177688 | orchestrator | 2026-02-03 04:51:00 | INFO  | Setting image_original_user = cirros
2026-02-03 04:51:07.177707 | orchestrator | 2026-02-03 04:51:00 | INFO  | Adding tag os:cirros
2026-02-03 04:51:07.177726 | orchestrator | 2026-02-03 04:51:01 | INFO  | Setting property architecture: x86_64
2026-02-03 04:51:07.177744 | orchestrator | 2026-02-03 04:51:01 | INFO  | Setting property hw_disk_bus: scsi
2026-02-03 04:51:07.177758 | orchestrator | 2026-02-03 04:51:02 | INFO  | Setting property hw_rng_model: virtio
2026-02-03 04:51:07.177768 | orchestrator | 2026-02-03 04:51:02 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-02-03 04:51:07.177787 | orchestrator | 2026-02-03 04:51:02 | INFO  | Setting property hw_watchdog_action: reset
2026-02-03 04:51:07.177805 | orchestrator | 2026-02-03 04:51:02 | INFO  | Setting property hypervisor_type: qemu
2026-02-03 04:51:07.177823 | orchestrator | 2026-02-03 04:51:03 | INFO  | Setting property os_distro: cirros
2026-02-03 04:51:07.177840 | orchestrator | 2026-02-03 04:51:03 | INFO  | Setting property os_purpose: minimal
2026-02-03 04:51:07.177858 | orchestrator | 2026-02-03 04:51:03 | INFO  | Setting property replace_frequency: never
2026-02-03 04:51:07.177876 | orchestrator | 2026-02-03 04:51:03 | INFO  | Setting property uuid_validity: none
2026-02-03 04:51:07.177893 | orchestrator | 2026-02-03 04:51:04 | INFO  | Setting property provided_until: none
2026-02-03 04:51:07.177910 | orchestrator | 2026-02-03 04:51:04 | INFO  | Setting property image_description: Cirros
2026-02-03 04:51:07.177928 | orchestrator | 2026-02-03 04:51:04 | INFO  | Setting property image_name: Cirros
2026-02-03 04:51:07.177945 | orchestrator | 2026-02-03 04:51:04 | INFO  | Setting property internal_version: 0.6.3
2026-02-03 04:51:07.177979 | orchestrator | 2026-02-03 04:51:05 | INFO  | Setting property image_original_user: cirros
2026-02-03 04:51:07.177996 | orchestrator | 2026-02-03 04:51:05 | INFO  | Setting property os_version: 0.6.3
2026-02-03 04:51:07.178130 | orchestrator | 2026-02-03 04:51:05 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-02-03 04:51:07.178156 | orchestrator | 2026-02-03 04:51:05 | INFO  | Setting property image_build_date: 2024-09-26
2026-02-03 04:51:07.178176 | orchestrator | 2026-02-03 04:51:06 | INFO  | Checking status of 'Cirros 0.6.3'
2026-02-03 04:51:07.178196 | orchestrator | 2026-02-03 04:51:06 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-02-03 04:51:07.178215 | orchestrator | 2026-02-03 04:51:06 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-02-03 04:51:07.517169 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-02-03 04:51:09.977531 | orchestrator | 2026-02-03 04:51:09 | INFO  | date: 2026-02-03
2026-02-03 04:51:09.977657 | orchestrator | 2026-02-03 04:51:09 | INFO  | image: octavia-amphora-haproxy-2024.2.20260203.qcow2
2026-02-03 04:51:09.977989 | orchestrator | 2026-02-03 04:51:09 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260203.qcow2
2026-02-03 04:51:09.978186 | orchestrator | 2026-02-03 04:51:09 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260203.qcow2.CHECKSUM
2026-02-03 04:51:10.151255 | orchestrator | 2026-02-03 04:51:10 | INFO  | checksum: d880b7d1e69be114deed8e1ea6aae1bb461587b7fcd8cdc7a6dedf8496c970b1
2026-02-03 04:51:10.249149 | orchestrator |
2026-02-03 04:51:10 | INFO  | It takes a moment until task 64e2b91c-74d1-440b-b109-4a49ef9a6c8b (image-manager) has been started and output is visible here.
2026-02-03 04:52:23.198288 | orchestrator | 2026-02-03 04:51:12 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-02-03'
2026-02-03 04:52:23.198407 | orchestrator | 2026-02-03 04:51:12 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260203.qcow2: 200
2026-02-03 04:52:23.198424 | orchestrator | 2026-02-03 04:51:12 | INFO  | Importing image OpenStack Octavia Amphora 2026-02-03
2026-02-03 04:52:23.198436 | orchestrator | 2026-02-03 04:51:12 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260203.qcow2
2026-02-03 04:52:23.198449 | orchestrator | 2026-02-03 04:51:14 | INFO  | Waiting for image to leave queued state...
2026-02-03 04:52:23.198460 | orchestrator | 2026-02-03 04:51:16 | INFO  | Waiting for import to complete...
2026-02-03 04:52:23.198471 | orchestrator | 2026-02-03 04:51:26 | INFO  | Waiting for import to complete...
2026-02-03 04:52:23.198482 | orchestrator | 2026-02-03 04:51:36 | INFO  | Waiting for import to complete...
2026-02-03 04:52:23.198493 | orchestrator | 2026-02-03 04:51:46 | INFO  | Waiting for import to complete...
2026-02-03 04:52:23.198506 | orchestrator | 2026-02-03 04:51:56 | INFO  | Waiting for import to complete...
2026-02-03 04:52:23.198518 | orchestrator | 2026-02-03 04:52:06 | INFO  | Waiting for import to complete...
2026-02-03 04:52:23.198530 | orchestrator | 2026-02-03 04:52:17 | INFO  | Import of 'OpenStack Octavia Amphora 2026-02-03' successfully completed, reloading images
2026-02-03 04:52:23.198542 | orchestrator | 2026-02-03 04:52:17 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-02-03'
2026-02-03 04:52:23.198575 | orchestrator | 2026-02-03 04:52:17 | INFO  | Setting internal_version = 2026-02-03
2026-02-03 04:52:23.198587 | orchestrator | 2026-02-03 04:52:17 | INFO  | Setting image_original_user = ubuntu
2026-02-03 04:52:23.198598 | orchestrator | 2026-02-03 04:52:17 | INFO  | Adding tag amphora
2026-02-03 04:52:23.198609 | orchestrator | 2026-02-03 04:52:17 | INFO  | Adding tag os:ubuntu
2026-02-03 04:52:23.198620 | orchestrator | 2026-02-03 04:52:18 | INFO  | Setting property architecture: x86_64
2026-02-03 04:52:23.198630 | orchestrator | 2026-02-03 04:52:18 | INFO  | Setting property hw_disk_bus: scsi
2026-02-03 04:52:23.198641 | orchestrator | 2026-02-03 04:52:18 | INFO  | Setting property hw_rng_model: virtio
2026-02-03 04:52:23.198652 | orchestrator | 2026-02-03 04:52:18 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-02-03 04:52:23.198663 | orchestrator | 2026-02-03 04:52:19 | INFO  | Setting property hw_watchdog_action: reset
2026-02-03 04:52:23.198674 | orchestrator | 2026-02-03 04:52:19 | INFO  | Setting property hypervisor_type: qemu
2026-02-03 04:52:23.198685 | orchestrator | 2026-02-03 04:52:19 | INFO  | Setting property os_distro: ubuntu
2026-02-03 04:52:23.198695 | orchestrator | 2026-02-03 04:52:19 | INFO  | Setting property replace_frequency: quarterly
2026-02-03 04:52:23.198706 | orchestrator | 2026-02-03 04:52:20 | INFO  | Setting property uuid_validity: last-1
2026-02-03 04:52:23.198717 | orchestrator | 2026-02-03 04:52:20 | INFO  | Setting property provided_until: none
2026-02-03 04:52:23.198727 | orchestrator | 2026-02-03 04:52:20 | INFO  | Setting property os_purpose: network
2026-02-03 04:52:23.198752 | orchestrator | 2026-02-03 04:52:20 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-02-03 04:52:23.198764 | orchestrator | 2026-02-03 04:52:21 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-02-03 04:52:23.198775 | orchestrator | 2026-02-03 04:52:21 | INFO  | Setting property internal_version: 2026-02-03
2026-02-03 04:52:23.198786 | orchestrator | 2026-02-03 04:52:21 | INFO  | Setting property image_original_user: ubuntu
2026-02-03 04:52:23.198796 | orchestrator | 2026-02-03 04:52:21 | INFO  | Setting property os_version: 2026-02-03
2026-02-03 04:52:23.198807 | orchestrator | 2026-02-03 04:52:22 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260203.qcow2
2026-02-03 04:52:23.198818 | orchestrator | 2026-02-03 04:52:22 | INFO  | Setting property image_build_date: 2026-02-03
2026-02-03 04:52:23.198829 | orchestrator | 2026-02-03 04:52:22 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-02-03'
2026-02-03 04:52:23.198840 | orchestrator | 2026-02-03 04:52:22 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-02-03'
2026-02-03 04:52:23.198869 | orchestrator | 2026-02-03 04:52:22 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-02-03 04:52:23.198880 | orchestrator | 2026-02-03 04:52:22 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-02-03 04:52:23.198892 | orchestrator | 2026-02-03 04:52:22 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-02-03 04:52:23.198903 | orchestrator | 2026-02-03 04:52:22 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-02-03 04:52:23.651923 | orchestrator | ok: Runtime: 0:03:13.868324
2026-02-03 04:52:23.665385 |
2026-02-03 04:52:23.665511 | TASK [Run checks]
2026-02-03 04:52:24.439730 | orchestrator | + set -e
2026-02-03 04:52:24.439878 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-03 04:52:24.439894 | orchestrator | ++ export INTERACTIVE=false
2026-02-03 04:52:24.439908 | orchestrator | ++ INTERACTIVE=false
2026-02-03 04:52:24.439916 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-03 04:52:24.439924 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-03 04:52:24.439932 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-03 04:52:24.440755 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-03 04:52:24.447736 | orchestrator |
2026-02-03 04:52:24.447798 | orchestrator | # CHECK
2026-02-03 04:52:24.447807 | orchestrator |
2026-02-03 04:52:24.447814 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-03 04:52:24.447825 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-03 04:52:24.447832 | orchestrator | + echo
2026-02-03 04:52:24.447839 | orchestrator | + echo '# CHECK'
2026-02-03 04:52:24.447845 | orchestrator | + echo
2026-02-03 04:52:24.447855 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-02-03 04:52:24.448686 | orchestrator | ++ semver 9.5.0 5.0.0
2026-02-03 04:52:24.517247 | orchestrator |
2026-02-03 04:52:24.517359 | orchestrator | ## Containers @ testbed-manager
2026-02-03 04:52:24.517377 | orchestrator |
2026-02-03 04:52:24.517392 | orchestrator | + [[ 1 -eq -1 ]]
2026-02-03 04:52:24.517404 | orchestrator | + echo
2026-02-03 04:52:24.517416 | orchestrator | + echo '## Containers @ testbed-manager'
2026-02-03 04:52:24.517429 | orchestrator | + echo
2026-02-03 04:52:24.517449 | orchestrator | + osism container testbed-manager ps
2026-02-03 04:52:26.753952 | orchestrator | 2026-02-03 04:52:26 | INFO  | Creating empty known_hosts file: /share/known_hosts
2026-02-03 04:52:27.168669 | orchestrator | CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
2026-02-03 04:52:27.168800 | orchestrator | b094b8d4e455
registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130  "dumb-init --single-…"  9 minutes ago  Up 9 minutes  prometheus_blackbox_exporter
2026-02-03 04:52:27.168822 | orchestrator | afb324b46b78  registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130  "dumb-init --single-…"  9 minutes ago  Up 9 minutes  prometheus_alertmanager
2026-02-03 04:52:27.168832 | orchestrator | 255bd66dabb2  registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130  "dumb-init --single-…"  9 minutes ago  Up 9 minutes  prometheus_cadvisor
2026-02-03 04:52:27.168841 | orchestrator | bd898fee57e2  registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130  "dumb-init --single-…"  10 minutes ago  Up 10 minutes  prometheus_node_exporter
2026-02-03 04:52:27.168851 | orchestrator | 29cef6aaa416  registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130  "dumb-init --single-…"  10 minutes ago  Up 10 minutes  prometheus_server
2026-02-03 04:52:27.168864 | orchestrator | eb13aef95898  registry.osism.tech/osism/cephclient:18.2.7  "/usr/bin/dumb-init …"  59 minutes ago  Up 59 minutes  cephclient
2026-02-03 04:52:27.168874 | orchestrator | 559b296a1e0b  registry.osism.tech/kolla/release/cron:3.0.20251130  "dumb-init --single-…"  2 hours ago  Up 2 hours  cron
2026-02-03 04:52:27.168883 | orchestrator | 0274b3db6ddb  registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130  "dumb-init --single-…"  2 hours ago  Up 2 hours  kolla_toolbox
2026-02-03 04:52:27.168914 | orchestrator | 56f7a5c7cd4d  registry.osism.tech/kolla/release/fluentd:5.0.8.20251130  "dumb-init --single-…"  2 hours ago  Up 2 hours  fluentd
2026-02-03 04:52:27.168924 | orchestrator | ed26245c89b3  registry.osism.tech/osism/openstackclient:2024.2  "/usr/bin/dumb-init …"  2 hours ago  Up 2 hours  openstackclient
2026-02-03 04:52:27.168933 | orchestrator | 3b03a847cf96  phpmyadmin/phpmyadmin:5.2  "/docker-entrypoint.…"  2 hours ago  Up 2 hours (healthy)  80/tcp  phpmyadmin
2026-02-03 04:52:27.168941 | orchestrator | 030b6f054b15  registry.osism.tech/osism/homer:v25.10.1  "/bin/sh /entrypoint…"  2 hours ago  Up 2 hours (healthy)  8080/tcp  homer
2026-02-03 04:52:27.168951 | orchestrator | 1d4b57f5be6a  registry.osism.tech/osism/cgit:1.2.3  "httpd-foreground"  2 hours ago  Up 2 hours  80/tcp  cgit
2026-02-03 04:52:27.168960 | orchestrator | fd412ce84698  registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta  "entrypoint.sh -f /e…"  2 hours ago  Up 2 hours (healthy)  192.168.16.5:3128->3128/tcp  squid
2026-02-03 04:52:27.168999 | orchestrator | f8efa8fa57c8  registry.osism.tech/osism/inventory-reconciler:0.20251130.0  "/sbin/tini -- /entr…"  2 hours ago  Up 2 hours (healthy)  manager-inventory_reconciler-1
2026-02-03 04:52:27.169019 | orchestrator | 85ee7eaf706a  registry.osism.tech/osism/ceph-ansible:0.20251130.0  "/entrypoint.sh osis…"  2 hours ago  Up 2 hours (healthy)  ceph-ansible
2026-02-03 04:52:27.169028 | orchestrator | a90ea3e3d795  registry.osism.tech/osism/osism-ansible:0.20251130.0  "/entrypoint.sh osis…"  2 hours ago  Up 2 hours (healthy)  osism-ansible
2026-02-03 04:52:27.169037 | orchestrator | 2f014455268f  registry.osism.tech/osism/osism-kubernetes:0.20251130.0  "/entrypoint.sh osis…"  2 hours ago  Up 2 hours (healthy)  osism-kubernetes
2026-02-03 04:52:27.169046 | orchestrator | 561efb814188  registry.osism.tech/osism/kolla-ansible:0.20251130.0  "/entrypoint.sh osis…"  2 hours ago  Up 2 hours (healthy)  kolla-ansible
2026-02-03 04:52:27.169055 | orchestrator | 790a4ecf39b5  registry.osism.tech/osism/ara-server:1.7.3  "sh -c '/wait && /ru…"  2 hours ago  Up 2 hours (healthy)  8000/tcp  manager-ara-server-1
2026-02-03 04:52:27.169065 | orchestrator | 40e4e0d3ee09  registry.osism.tech/osism/osism:0.20251130.1  "/sbin/tini -- osism…"  2 hours ago  Up 2 hours (healthy)  manager-openstack-1
2026-02-03 04:52:27.169074 | orchestrator | 5757904cd175  registry.osism.tech/dockerhub/library/redis:7.4.7-alpine  "docker-entrypoint.s…"  2 hours ago  Up 2 hours (healthy)  6379/tcp  manager-redis-1
2026-02-03 04:52:27.169090 | orchestrator | 81aeec863fdc  registry.osism.tech/osism/osism:0.20251130.1  "/sbin/tini -- osism…"  2 hours ago  Up 2 hours (healthy)  manager-flower-1
2026-02-03 04:52:27.169099 | orchestrator | 8c258d775ada  registry.osism.tech/osism/osism-frontend:0.20251130.1  "docker-entrypoint.s…"  2 hours ago  Up 2 hours  192.168.16.5:3000->3000/tcp  osism-frontend
2026-02-03 04:52:27.169681 | orchestrator | cdd0da7d0685  registry.osism.tech/osism/osism:0.20251130.1  "/sbin/tini -- osism…"  2 hours ago  Up 2 hours (healthy)  manager-beat-1
2026-02-03 04:52:27.169704 | orchestrator | 34ced2d74849  registry.osism.tech/dockerhub/library/mariadb:11.8.4  "docker-entrypoint.s…"  2 hours ago  Up 2 hours (healthy)  3306/tcp  manager-mariadb-1
2026-02-03 04:52:27.169714 | orchestrator | b364734e58ab  registry.osism.tech/osism/osism:0.20251130.1  "/sbin/tini -- osism…"  2 hours ago  Up 2 hours (healthy)  192.168.16.5:8000->8000/tcp  manager-api-1
2026-02-03 04:52:27.169722 | orchestrator | cd7f3f5effce  registry.osism.tech/osism/osism:0.20251130.1  "/sbin/tini -- osism…"  2 hours ago  Up 2 hours (healthy)  manager-listener-1
2026-02-03 04:52:27.169731 | orchestrator | 50d5c7f64ea6  registry.osism.tech/osism/osism:0.20251130.1  "/sbin/tini -- sleep…"  2 hours ago  Up 2 hours (healthy)  osismclient
2026-02-03 04:52:27.169747 | orchestrator | 6985e7193cba  registry.osism.tech/dockerhub/library/traefik:v3.5.0  "/entrypoint.sh trae…"  2 hours ago  Up 2 hours (healthy)  192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp  traefik
2026-02-03 04:52:27.567475 | orchestrator |
2026-02-03 04:52:27.567556 | orchestrator | ## Images @ testbed-manager
2026-02-03 04:52:27.567564 | orchestrator |
2026-02-03 04:52:27.567570 | orchestrator | + echo
2026-02-03 04:52:27.567575 | orchestrator | + echo '## Images @ testbed-manager'
2026-02-03 04:52:27.567581 | orchestrator | + echo
2026-02-03 04:52:27.567589 | orchestrator | + osism container testbed-manager images
2026-02-03 04:52:30.099169 | orchestrator | REPOSITORY  TAG  IMAGE ID  CREATED  SIZE
2026-02-03 04:52:30.099291 | orchestrator | registry.osism.tech/osism/openstackclient  2024.2  a79be52cb0e0  25 hours ago  238MB
2026-02-03 04:52:30.099307 | orchestrator | registry.osism.tech/dockerhub/library/redis  7.4.7-alpine  e08bd8d5a677  6 days ago  41.4MB
2026-02-03 04:52:30.099319 | orchestrator | registry.osism.tech/osism/homer  v25.10.1  ea34b371c716  2 months ago  11.5MB
2026-02-03 04:52:30.099331 | orchestrator | registry.osism.tech/osism/kolla-ansible  0.20251130.0  0f140ec71e5f  2 months ago  608MB
2026-02-03 04:52:30.099342 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox  19.7.1.20251130  314d22193a72  2 months ago  669MB
2026-02-03 04:52:30.099353 | orchestrator | registry.osism.tech/kolla/release/cron  3.0.20251130  e1e0428a330f  2 months ago  265MB
2026-02-03 04:52:30.099364 | orchestrator | registry.osism.tech/kolla/release/fluentd  5.0.8.20251130  fb3c98fc8cae  2 months ago  578MB
2026-02-03 04:52:30.099378 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter  0.25.0.20251130  7bbb4f6f4831  2 months ago  308MB
2026-02-03 04:52:30.099389 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor  0.49.2.20251130  591cbce746c1  2 months ago  357MB
2026-02-03 04:52:30.099429 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager  0.28.0.20251130  ba994ea4acda  2 months ago  404MB
2026-02-03 04:52:30.099442 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server  2.55.1.20251130  56b43d5c716a  2 months ago  839MB
2026-02-03 04:52:30.099453 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter  1.8.2.20251130  c1ab1d07f7ef  2 months ago  305MB
2026-02-03 04:52:30.099463 | orchestrator | registry.osism.tech/osism/inventory-reconciler  0.20251130.0  1bfc1dadeee1  2 months ago  330MB
2026-02-03 04:52:30.099474 | orchestrator | registry.osism.tech/osism/osism-ansible  0.20251130.0  42988b2d229c  2 months ago  613MB
2026-02-03 04:52:30.099485 | orchestrator | registry.osism.tech/osism/ceph-ansible  0.20251130.0  a212d8ca4a50  2 months ago  560MB
2026-02-03 04:52:30.099496 | orchestrator | registry.osism.tech/osism/osism-kubernetes  0.20251130.0  9beff03cb77b  2 months ago  1.23GB
2026-02-03 04:52:30.099507 | orchestrator | registry.osism.tech/osism/osism  0.20251130.1  95213af683ec  2 months ago  383MB
2026-02-03 04:52:30.099518 | orchestrator | registry.osism.tech/osism/osism-frontend  0.20251130.1  2cb6e7609620  2 months ago  238MB
2026-02-03 04:52:30.099529 | orchestrator | registry.osism.tech/dockerhub/library/mariadb  11.8.4  70745dd8f1d0  2 months ago  334MB
2026-02-03 04:52:30.099540 | orchestrator | phpmyadmin/phpmyadmin  5.2  e66b1f5a8c58  3 months ago  742MB
2026-02-03 04:52:30.099551 | orchestrator | registry.osism.tech/osism/ara-server  1.7.3  d1b687333f2f  5 months ago  275MB
2026-02-03 04:52:30.099562 | orchestrator | registry.osism.tech/dockerhub/library/traefik  v3.5.0  11cc59587f6a  6 months ago  226MB
2026-02-03 04:52:30.099572 | orchestrator | registry.osism.tech/osism/cephclient  18.2.7  ae977aa79826  9 months ago  453MB
2026-02-03 04:52:30.099583 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid  6.1-23.10_beta  34b6bbbcf74b  20 months ago  146MB
2026-02-03 04:52:30.099594 | orchestrator | registry.osism.tech/osism/cgit  1.2.3  16e7285642b1  2 years ago  545MB
2026-02-03 04:52:30.491823 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-02-03 04:52:30.492589 | orchestrator | ++ semver 9.5.0 5.0.0
2026-02-03 04:52:30.545382 | orchestrator |
2026-02-03 04:52:30.545511 | orchestrator | ## Containers @ testbed-node-0
2026-02-03 04:52:30.545531 | orchestrator |
2026-02-03 04:52:30.545543 | orchestrator | + [[ 1 -eq -1 ]]
2026-02-03 04:52:30.545555 | orchestrator | + echo
2026-02-03 04:52:30.545567 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-02-03 04:52:30.545579 | orchestrator | + echo
2026-02-03 04:52:30.545590 | orchestrator | + osism container
testbed-node-0 ps
2026-02-03 04:52:33.152391 | orchestrator | CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
2026-02-03 04:52:33.152507 | orchestrator | 570e91a420ad  registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130  "dumb-init --single-…"  3 minutes ago  Up 3 minutes (healthy)  magnum_conductor
2026-02-03 04:52:33.152559 | orchestrator | 97bef8f48708  registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130  "dumb-init --single-…"  3 minutes ago  Up 3 minutes (healthy)  magnum_api
2026-02-03 04:52:33.152581 | orchestrator | fc0bfb4057e3  registry.osism.tech/kolla/release/grafana:12.3.0.20251130  "dumb-init --single-…"  8 minutes ago  Up 7 minutes  grafana
2026-02-03 04:52:33.152600 | orchestrator | 08bc02e2effa  registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130  "dumb-init --single-…"  9 minutes ago  Up 9 minutes  prometheus_elasticsearch_exporter
2026-02-03 04:52:33.152651 | orchestrator | e8dce5dc9060  registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130  "dumb-init --single-…"  9 minutes ago  Up 9 minutes  prometheus_cadvisor
2026-02-03 04:52:33.152671 | orchestrator | b399bfd88ffb  registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130  "dumb-init --single-…"  10 minutes ago  Up 10 minutes  prometheus_memcached_exporter
2026-02-03 04:52:33.152697 | orchestrator | 64e7f4f417b6  registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130  "dumb-init --single-…"  10 minutes ago  Up 10 minutes  prometheus_mysqld_exporter
2026-02-03 04:52:33.152709 | orchestrator | 64c81c2176d1  registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130  "dumb-init --single-…"  10 minutes ago  Up 10 minutes  prometheus_node_exporter
2026-02-03 04:52:33.152720 | orchestrator | ac895912fa02  registry.osism.tech/kolla/release/manila-share:19.1.1.20251130  "dumb-init --single-…"  14 minutes ago  Up 14 minutes (healthy)  manila_share
2026-02-03 04:52:33.152732 | orchestrator | 4006ad84e9c7  registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130  "dumb-init --single-…"  14 minutes ago  Up 14 minutes (healthy)  manila_scheduler
2026-02-03 04:52:33.152744 | orchestrator | fd99d0003e60  registry.osism.tech/kolla/release/manila-data:19.1.1.20251130  "dumb-init --single-…"  15 minutes ago  Up 15 minutes (healthy)  manila_data
2026-02-03 04:52:33.152755 | orchestrator | 79a7a3a8ddbc  registry.osism.tech/kolla/release/manila-api:19.1.1.20251130  "dumb-init --single-…"  15 minutes ago  Up 15 minutes (healthy)  manila_api
2026-02-03 04:52:33.152766 | orchestrator | d3a96536b6db  registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130  "dumb-init --single-…"  18 minutes ago  Up 18 minutes (healthy)  aodh_notifier
2026-02-03 04:52:33.152777 | orchestrator | 8af4902536f8  registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130  "dumb-init --single-…"  18 minutes ago  Up 18 minutes (healthy)  aodh_listener
2026-02-03 04:52:33.152787 | orchestrator | 79650e7066d0  registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130  "dumb-init --single-…"  18 minutes ago  Up 18 minutes (healthy)  aodh_evaluator
2026-02-03 04:52:33.152798 | orchestrator | 0659e7371e2d  registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130  "dumb-init --single-…"  18 minutes ago  Up 18 minutes (healthy)  aodh_api
2026-02-03 04:52:33.152809 | orchestrator | 5bc943fc5829  registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130  "dumb-init --single-…"  20 minutes ago  Up 20 minutes  ceilometer_central
2026-02-03 04:52:33.152820 | orchestrator | f8ff6a82d110  registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130  "dumb-init --single-…"  20 minutes ago  Up 20 minutes (healthy)  ceilometer_notification
2026-02-03 04:52:33.152831 | orchestrator | f85c4df0d5f4  registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130  "dumb-init --single-…"  21 minutes ago  Up 21 minutes (healthy)  octavia_worker
2026-02-03 04:52:33.152870 | orchestrator | 3bb7f6c696fe  registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130  "dumb-init --single-…"  22 minutes ago  Up 22 minutes (healthy)  octavia_housekeeping
2026-02-03 04:52:33.152882 | orchestrator | 9b82db32be55  registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130  "dumb-init --single-…"  22 minutes ago  Up 22 minutes (healthy)  octavia_health_manager
2026-02-03 04:52:33.152893 | orchestrator | a2b6bfcc134a  registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130  "dumb-init --single-…"  22 minutes ago  Up 22 minutes  octavia_driver_agent
2026-02-03 04:52:33.152912 | orchestrator | 2532f4e2b730  registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130  "dumb-init --single-…"  22 minutes ago  Up 22 minutes (healthy)  octavia_api
2026-02-03 04:52:33.152923 | orchestrator | 1cb9a7e207e1  registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130  "dumb-init --single-…"  26 minutes ago  Up 26 minutes (healthy)  designate_worker
2026-02-03 04:52:33.152934 | orchestrator | 5ecca7b05d9d  registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130  "dumb-init --single-…"  27 minutes ago  Up 27 minutes (healthy)  designate_mdns
2026-02-03 04:52:33.152951 | orchestrator | ece9fbce467c  registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130  "dumb-init --single-…"  27 minutes ago  Up 27 minutes (healthy)  designate_producer
2026-02-03 04:52:33.152962 | orchestrator | 27d3a458cc62  registry.osism.tech/kolla/release/designate-central:19.0.1.20251130  "dumb-init --single-…"  27 minutes ago  Up 27 minutes (healthy)  designate_central
2026-02-03 04:52:33.152973 | orchestrator | e0652ecb6377  registry.osism.tech/kolla/release/designate-api:19.0.1.20251130  "dumb-init --single-…"  27 minutes ago  Up 27 minutes (healthy)  designate_api
2026-02-03 04:52:33.152984 | orchestrator | b9d372bdb276  registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130  "dumb-init --single-…"  27 minutes ago  Up 27 minutes (healthy)  designate_backend_bind9
2026-02-03 04:52:33.152995 | orchestrator | 1f640259eecf  registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130  "dumb-init --single-…"  29 minutes ago  Up 29 minutes (healthy)  barbican_worker
2026-02-03 04:52:33.153006 | orchestrator | a6f963c4eac7  registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130  "dumb-init --single-…"  29 minutes ago  Up 29 minutes (healthy)  barbican_keystone_listener
2026-02-03 04:52:33.153017 | orchestrator | eb764aa7ff21  registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130  "dumb-init --single-…"  29 minutes ago  Up 29 minutes (healthy)  barbican_api
2026-02-03 04:52:33.153028 | orchestrator | 6ca0f655784e  registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130  "dumb-init --single-…"  31 minutes ago  Up 31 minutes (healthy)  cinder_backup
2026-02-03 04:52:33.153039 | orchestrator | f900e8d94ad5  registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130  "dumb-init --single-…"  31 minutes ago  Up 31 minutes (healthy)  cinder_volume
2026-02-03 04:52:33.153072 | orchestrator | ae6b3947f4d6  registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130  "dumb-init --single-…"  32 minutes ago  Up 32 minutes (healthy)  cinder_scheduler
2026-02-03 04:52:33.153083 | orchestrator | db9c3657151e  registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130  "dumb-init --single-…"  32 minutes ago  Up 32 minutes (healthy)  cinder_api
2026-02-03 04:52:33.153094 | orchestrator | 67c60b9dcbeb  registry.osism.tech/kolla/release/glance-api:29.0.1.20251130  "dumb-init --single-…"  34 minutes ago  Up 34 minutes (healthy)  glance_api
2026-02-03 04:52:33.153105 | orchestrator | 069a942c79e1  registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130  "dumb-init --single-…"  37 minutes ago  Up 37 minutes (healthy)  skyline_console
2026-02-03 04:52:33.153116 | orchestrator | a1d5c841a312  registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130  "dumb-init --single-…"  37 minutes ago  Up 37 minutes (healthy)  skyline_apiserver
2026-02-03 04:52:33.153162 | orchestrator | 07ee3e347594  registry.osism.tech/kolla/release/horizon:25.1.2.20251130  "dumb-init --single-…"  39 minutes ago  Up 39 minutes (healthy)  horizon
2026-02-03 04:52:33.153188 | orchestrator | 9559ffbbd553  registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130  "dumb-init --single-…"  43 minutes ago  Up 42 minutes (healthy)  nova_novncproxy
2026-02-03 04:52:33.153207 | orchestrator | 02dad3a4ae51  registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130  "dumb-init --single-…"  43 minutes ago  Up 43 minutes (healthy)  nova_conductor
2026-02-03 04:52:33.153233 | orchestrator | fbbbbb979158  registry.osism.tech/kolla/release/nova-api:30.2.1.20251130  "dumb-init --single-…"  44 minutes ago  Up 44 minutes (healthy)  nova_api
2026-02-03 04:52:33.153250 | orchestrator | 6d1b70c76575  registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130  "dumb-init --single-…"  45 minutes ago  Up 45 minutes (healthy)  nova_scheduler
2026-02-03 04:52:33.153267 | orchestrator | 04b5f1ae6bb8  registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130  "dumb-init --single-…"  50 minutes ago  Up 50 minutes (healthy)  neutron_server
2026-02-03 04:52:33.153286 | orchestrator | 6fcf14b19457  registry.osism.tech/kolla/release/placement-api:12.0.1.20251130  "dumb-init --single-…"  53 minutes ago  Up 53 minutes (healthy)  placement_api
2026-02-03 04:52:33.153304 | orchestrator | 6746bd1493ed  registry.osism.tech/kolla/release/keystone:26.0.1.20251130  "dumb-init --single-…"  55 minutes ago  Up 55 minutes (healthy)  keystone
2026-02-03 04:52:33.153324 | orchestrator | 736fae173451  registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130  "dumb-init --single-…"  55 minutes ago  Up 55 minutes (healthy)  keystone_fernet
2026-02-03 04:52:33.153342 | orchestrator | 458e6bf25f55  registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130  "dumb-init --single-…"  56 minutes ago  Up 56 minutes (healthy)  keystone_ssh
2026-02-03 04:52:33.153362 | orchestrator |
orchestrator | 71a1c3ebfd89  registry.osism.tech/osism/ceph-daemon:18.2.7  "/usr/bin/ceph-mgr -…"  58 minutes ago  Up 58 minutes  ceph-mgr-testbed-node-0
2026-02-03 04:52:33.153380 | orchestrator | 98f60046b823  registry.osism.tech/osism/ceph-daemon:18.2.7  "/usr/bin/ceph-crash"  About an hour ago  Up About an hour  ceph-crash-testbed-node-0
2026-02-03 04:52:33.153398 | orchestrator | f906be70bf4b  registry.osism.tech/osism/ceph-daemon:18.2.7  "/usr/bin/ceph-mon -…"  About an hour ago  Up About an hour  ceph-mon-testbed-node-0
2026-02-03 04:52:33.153417 | orchestrator | 2067fe8668ad  registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130  "dumb-init --single-…"  About an hour ago  Up About an hour  ovn_northd
2026-02-03 04:52:33.153434 | orchestrator | 36a2367ea357  registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130  "dumb-init --single-…"  About an hour ago  Up About an hour  ovn_sb_db
2026-02-03 04:52:33.153450 | orchestrator | c0015a6f84dc  registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130  "dumb-init --single-…"  About an hour ago  Up About an hour  ovn_nb_db
2026-02-03 04:52:33.153470 | orchestrator | f766a77a9120  registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130  "dumb-init --single-…"  About an hour ago  Up About an hour  ovn_controller
2026-02-03 04:52:33.153497 | orchestrator | c5b98a9e560c  registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130  "dumb-init --single-…"  About an hour ago  Up About an hour (healthy)  openvswitch_vswitchd
2026-02-03 04:52:33.153517 | orchestrator | 9b2d51d5c598  registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130  "dumb-init --single-…"  About an hour ago  Up About an hour (healthy)  openvswitch_db
2026-02-03 04:52:33.153547 | orchestrator | 2250768b2522  registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130  "dumb-init --single-…"  About an hour ago  Up About an hour (healthy)  rabbitmq
2026-02-03 04:52:33.153577 | orchestrator | 4a6b4c0819af  registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130  "dumb-init -- kolla_…"  About an hour ago  Up About an hour (healthy)  mariadb
2026-02-03 04:52:33.153597 | orchestrator | 19661f7eca21  registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130  "dumb-init --single-…"  About an hour ago  Up About an hour (healthy)  redis_sentinel
2026-02-03 04:52:33.153614 | orchestrator | e8acbd3a9a00  registry.osism.tech/kolla/release/redis:7.0.15.20251130  "dumb-init --single-…"  About an hour ago  Up About an hour (healthy)  redis
2026-02-03 04:52:33.153631 | orchestrator | 22fe87af358a  registry.osism.tech/kolla/release/memcached:1.6.24.20251130  "dumb-init --single-…"  About an hour ago  Up About an hour (healthy)  memcached
2026-02-03 04:52:33.153648 | orchestrator | 7ea20604e034  registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130  "dumb-init --single-…"  2 hours ago  Up 2 hours (healthy)  opensearch_dashboards
2026-02-03 04:52:33.153664 | orchestrator | 01fd1556a098  registry.osism.tech/kolla/release/opensearch:2.19.4.20251130  "dumb-init --single-…"  2 hours ago  Up 2 hours (healthy)  opensearch
2026-02-03 04:52:33.153682 | orchestrator | 63b6d05704a0  registry.osism.tech/kolla/release/keepalived:2.2.8.20251130  "dumb-init --single-…"  2 hours ago  Up 2 hours  keepalived
2026-02-03 04:52:33.153699 | orchestrator | aab205ea73fa  registry.osism.tech/kolla/release/proxysql:3.0.3.20251130  "dumb-init --single-…"  2 hours ago  Up 2 hours (healthy)  proxysql
2026-02-03 04:52:33.153716 | orchestrator | bba038685f5f  registry.osism.tech/kolla/release/haproxy:2.8.15.20251130  "dumb-init --single-…"  2 hours ago  Up 2 hours (healthy)  haproxy
2026-02-03 04:52:33.153734 | orchestrator | 32b09f06c4be  registry.osism.tech/kolla/release/cron:3.0.20251130  "dumb-init --single-…"  2 hours ago  Up 2 hours  cron
2026-02-03 04:52:33.153752 | orchestrator | fd03799dba06  registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130  "dumb-init --single-…"  2 hours ago  Up 2 hours  kolla_toolbox
2026-02-03 04:52:33.153769 | orchestrator | 6d16bc33a18f  registry.osism.tech/kolla/release/fluentd:5.0.8.20251130  "dumb-init --single-…"  2 hours ago  Up 2 hours  fluentd
2026-02-03 04:52:33.556968 | orchestrator |
2026-02-03 04:52:33.557098 | orchestrator | ## Images @ testbed-node-0
2026-02-03 04:52:33.557116 | orchestrator |
2026-02-03 04:52:33.557160 | orchestrator | + echo
2026-02-03 04:52:33.557196 | orchestrator | + echo '## Images @ testbed-node-0'
2026-02-03 04:52:33.557218 | orchestrator | + echo
2026-02-03 04:52:33.557236 | orchestrator | + osism container testbed-node-0 images
2026-02-03 04:52:36.142692 | orchestrator | REPOSITORY  TAG  IMAGE ID  CREATED  SIZE
2026-02-03 04:52:36.142817 | orchestrator | registry.osism.tech/kolla/release/rabbitmq  3.13.7.20251130  618df24dfbf4  2 months ago  322MB
2026-02-03 04:52:36.142834 | orchestrator | registry.osism.tech/kolla/release/memcached  1.6.24.20251130  8a9865997707  2 months ago  266MB
2026-02-03 04:52:36.142847 | orchestrator | registry.osism.tech/kolla/release/opensearch  2.19.4.20251130  dc62f23331d2  2 months ago  1.56GB
2026-02-03 04:52:36.142859 | orchestrator | registry.osism.tech/kolla/release/keepalived  2.2.8.20251130  94862d07fc5a  2 months ago  276MB
2026-02-03 04:52:36.142890 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards  2.19.4.20251130  3b3613dd9b1a  2 months ago  1.53GB
2026-02-03 04:52:36.142902 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox  19.7.1.20251130  314d22193a72  2 months ago  669MB
2026-02-03 04:52:36.142913 | orchestrator | registry.osism.tech/kolla/release/cron  3.0.20251130  e1e0428a330f  2 months ago  265MB
2026-02-03 04:52:36.142924 | orchestrator | registry.osism.tech/kolla/release/grafana  12.3.0.20251130  6eb3b7b1dbf2  2 months ago  1.02GB
2026-02-03 04:52:36.142935 | orchestrator | registry.osism.tech/kolla/release/proxysql  3.0.3.20251130  2c7177938c0e  2 months ago  412MB
2026-02-03 04:52:36.142947 | orchestrator | registry.osism.tech/kolla/release/haproxy
2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-02-03 04:52:36.142958 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-03 04:52:36.142969 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-02-03 04:52:36.142980 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-02-03 04:52:36.142991 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-02-03 04:52:36.143062 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB 2026-02-03 04:52:36.143074 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-02-03 04:52:36.143085 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-02-03 04:52:36.143096 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-03 04:52:36.143107 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-02-03 04:52:36.143118 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-03 04:52:36.143129 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-02-03 04:52:36.143166 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-02-03 04:52:36.143178 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-02-03 04:52:36.143190 | orchestrator | 
registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-02-03 04:52:36.143201 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-02-03 04:52:36.143212 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-02-03 04:52:36.143224 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-02-03 04:52:36.143244 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-02-03 04:52:36.143257 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-02-03 04:52:36.143270 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-02-03 04:52:36.143292 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-02-03 04:52:36.143324 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-02-03 04:52:36.143339 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-02-03 04:52:36.143351 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-02-03 04:52:36.143364 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-02-03 04:52:36.143377 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-02-03 04:52:36.143390 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-02-03 04:52:36.143402 | orchestrator | 
registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-02-03 04:52:36.143415 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-02-03 04:52:36.143428 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-02-03 04:52:36.143440 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-02-03 04:52:36.143453 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-02-03 04:52:36.143466 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-02-03 04:52:36.143479 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-02-03 04:52:36.143491 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-02-03 04:52:36.143504 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-02-03 04:52:36.143517 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-02-03 04:52:36.143530 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-02-03 04:52:36.143543 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-02-03 04:52:36.143555 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-02-03 04:52:36.143665 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-02-03 04:52:36.143680 | orchestrator | 
registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-02-03 04:52:36.143692 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-02-03 04:52:36.143703 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-02-03 04:52:36.143714 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-02-03 04:52:36.143725 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-02-03 04:52:36.143744 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 2026-02-03 04:52:36.143755 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-02-03 04:52:36.143772 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-02-03 04:52:36.143783 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-02-03 04:52:36.143794 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-02-03 04:52:36.143806 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-02-03 04:52:36.143817 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-02-03 04:52:36.143828 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-02-03 04:52:36.143839 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-02-03 04:52:36.143850 | orchestrator | 
registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-02-03 04:52:36.143861 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-02-03 04:52:36.143872 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-02-03 04:52:36.143884 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB 2026-02-03 04:52:36.518365 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-03 04:52:36.519126 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-03 04:52:36.581580 | orchestrator | 2026-02-03 04:52:36.581664 | orchestrator | ## Containers @ testbed-node-1 2026-02-03 04:52:36.581684 | orchestrator | 2026-02-03 04:52:36.581696 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-03 04:52:36.581708 | orchestrator | + echo 2026-02-03 04:52:36.581719 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-02-03 04:52:36.581732 | orchestrator | + echo 2026-02-03 04:52:36.581743 | orchestrator | + osism container testbed-node-1 ps 2026-02-03 04:52:39.103659 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-03 04:52:39.103832 | orchestrator | 2a87b7fb7abd registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-02-03 04:52:39.103862 | orchestrator | 914452c9687c registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-02-03 04:52:39.104842 | orchestrator | 544a38d8bee3 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-02-03 04:52:39.104902 | orchestrator | 536034337b4a registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 
9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-02-03 04:52:39.104920 | orchestrator | 79d371131725 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-02-03 04:52:39.104933 | orchestrator | e5fd301eb215 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-02-03 04:52:39.104972 | orchestrator | 4c6f0140f9a6 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-02-03 04:52:39.104985 | orchestrator | 10b13d597c47 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-03 04:52:39.104997 | orchestrator | 9d6ac309ecbd registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-02-03 04:52:39.105007 | orchestrator | 294c5992c9d5 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-02-03 04:52:39.105017 | orchestrator | 7c4490638867 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-02-03 04:52:39.105028 | orchestrator | 761f8511458a registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-02-03 04:52:39.105061 | orchestrator | 45d25f344e71 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-02-03 04:52:39.105072 | orchestrator | 4700d60f872c registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 
"dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-02-03 04:52:39.105082 | orchestrator | 92ba054306f6 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-02-03 04:52:39.105092 | orchestrator | 9323c1a1d801 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-02-03 04:52:39.105103 | orchestrator | b7fac75a0729 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-02-03 04:52:39.105113 | orchestrator | 995caf73b47b registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification 2026-02-03 04:52:39.105123 | orchestrator | 545e5ed51e89 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 21 minutes (healthy) octavia_worker 2026-02-03 04:52:39.105174 | orchestrator | c320d7501da7 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping 2026-02-03 04:52:39.105185 | orchestrator | a934ef24797b registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager 2026-02-03 04:52:39.105195 | orchestrator | f5b6ef37b186 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent 2026-02-03 04:52:39.105204 | orchestrator | 6908fa7baa2f registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-02-03 04:52:39.105214 | orchestrator | 4a5939d94d0e 
registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 26 minutes (healthy) designate_worker 2026-02-03 04:52:39.105232 | orchestrator | 44394cc39e58 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns 2026-02-03 04:52:39.105241 | orchestrator | 36563b293d21 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer 2026-02-03 04:52:39.105250 | orchestrator | 2e9a4e3741eb registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central 2026-02-03 04:52:39.105259 | orchestrator | ed50ee6c1fe0 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-02-03 04:52:39.105269 | orchestrator | 5430620ec05f registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 2026-02-03 04:52:39.105277 | orchestrator | 0e6f1cdf3a59 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-02-03 04:52:39.105287 | orchestrator | c33260d59c0c registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-02-03 04:52:39.105298 | orchestrator | 188ef33db09d registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) barbican_api 2026-02-03 04:52:39.105307 | orchestrator | 21241782bd44 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup 2026-02-03 
04:52:39.105317 | orchestrator | ccca6e3603f9 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume 2026-02-03 04:52:39.105327 | orchestrator | 82b9899cf257 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler 2026-02-03 04:52:39.105336 | orchestrator | fabab484dbba registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api 2026-02-03 04:52:39.105352 | orchestrator | fc6e7b556802 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api 2026-02-03 04:52:39.105363 | orchestrator | 663c8c388cf3 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_console 2026-02-03 04:52:39.105372 | orchestrator | 948b3a7db3ba registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_apiserver 2026-02-03 04:52:39.105395 | orchestrator | ddda454825d5 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon 2026-02-03 04:52:39.105405 | orchestrator | a7863d6bf13f registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy 2026-02-03 04:52:39.105422 | orchestrator | 445dac2d2c95 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_conductor 2026-02-03 04:52:39.105432 | orchestrator | 2f84f23e8bfa registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_api 2026-02-03 04:52:39.105441 | orchestrator | 
f3adc57aded1 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_scheduler 2026-02-03 04:52:39.105450 | orchestrator | 7ebed8f17f3f registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 50 minutes ago Up 49 minutes (healthy) neutron_server 2026-02-03 04:52:39.105460 | orchestrator | ff3ec6f4573c registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) placement_api 2026-02-03 04:52:39.105468 | orchestrator | 72aa9994176e registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone 2026-02-03 04:52:39.105478 | orchestrator | 6db223a48e74 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_fernet 2026-02-03 04:52:39.105487 | orchestrator | 702e46ee7b3a registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone_ssh 2026-02-03 04:52:39.105497 | orchestrator | a28472445b34 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 58 minutes ago Up 58 minutes ceph-mgr-testbed-node-1 2026-02-03 04:52:39.105506 | orchestrator | bc801ca5677f registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1 2026-02-03 04:52:39.105515 | orchestrator | 9e707d2df2a9 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1 2026-02-03 04:52:39.105524 | orchestrator | 0cf1f60d5d3a registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-02-03 04:52:39.105533 | orchestrator | 6993c125d364 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 
"dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-02-03 04:52:39.105543 | orchestrator | 5d8426139f14 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-02-03 04:52:39.105553 | orchestrator | ea2c6025c24c registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-02-03 04:52:39.105563 | orchestrator | cd75a7c69725 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-02-03 04:52:39.105573 | orchestrator | c1cfa37237f1 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-02-03 04:52:39.105586 | orchestrator | 33bab37d2bd9 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-02-03 04:52:39.105611 | orchestrator | 6028198644ce registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-02-03 04:52:39.105621 | orchestrator | dd95244aaaa7 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-02-03 04:52:39.105631 | orchestrator | 58a2e82f5560 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-02-03 04:52:39.105640 | orchestrator | 0a5a892e7d25 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-02-03 04:52:39.105650 | orchestrator | 11b345045c58 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 
"dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-02-03 04:52:39.105666 | orchestrator | f4b0e4ef9336 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch 2026-02-03 04:52:39.105677 | orchestrator | 050c7bbf2627 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-02-03 04:52:39.105686 | orchestrator | 6387f999de73 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-02-03 04:52:39.105696 | orchestrator | ffcd8a429388 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-02-03 04:52:39.105707 | orchestrator | 550f95f24f7f registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-03 04:52:39.105720 | orchestrator | 62da1c947d7e registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-03 04:52:39.105730 | orchestrator | 4aa1c4adf11e registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-03 04:52:39.495214 | orchestrator | 2026-02-03 04:52:39.495335 | orchestrator | ## Images @ testbed-node-1 2026-02-03 04:52:39.495352 | orchestrator | 2026-02-03 04:52:39.495364 | orchestrator | + echo 2026-02-03 04:52:39.495376 | orchestrator | + echo '## Images @ testbed-node-1' 2026-02-03 04:52:39.495388 | orchestrator | + echo 2026-02-03 04:52:39.495399 | orchestrator | + osism container testbed-node-1 images 2026-02-03 04:52:42.102285 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-03 04:52:42.102365 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-02-03 04:52:42.102373 | orchestrator 
| registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-02-03 04:52:42.102379 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-02-03 04:52:42.102385 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-02-03 04:52:42.102391 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-02-03 04:52:42.102396 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-03 04:52:42.102415 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-03 04:52:42.102421 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-02-03 04:52:42.102426 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-02-03 04:52:42.102432 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-02-03 04:52:42.102437 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-03 04:52:42.102442 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-02-03 04:52:42.102447 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-02-03 04:52:42.102452 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-02-03 04:52:42.102457 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB 2026-02-03 04:52:42.102462 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 
aedc672fb472 2 months ago 301MB 2026-02-03 04:52:42.102467 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-02-03 04:52:42.102473 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-03 04:52:42.102478 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-02-03 04:52:42.102483 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-03 04:52:42.102488 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-02-03 04:52:42.102493 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-02-03 04:52:42.102499 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-02-03 04:52:42.102504 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-02-03 04:52:42.102509 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-02-03 04:52:42.102514 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-02-03 04:52:42.102519 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-02-03 04:52:42.102525 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-02-03 04:52:42.102530 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-02-03 04:52:42.102538 | orchestrator | 
registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-02-03 04:52:42.102547 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-02-03 04:52:42.102565 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-02-03 04:52:42.102575 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-02-03 04:52:42.102580 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-02-03 04:52:42.102585 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-02-03 04:52:42.102590 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-02-03 04:52:42.102596 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-02-03 04:52:42.102614 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-02-03 04:52:42.102619 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-02-03 04:52:42.102624 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-02-03 04:52:42.102629 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-02-03 04:52:42.102634 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-02-03 04:52:42.102639 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-02-03 04:52:42.102644 | orchestrator | 
registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-02-03 04:52:42.102649 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-02-03 04:52:42.102654 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-02-03 04:52:42.102659 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-02-03 04:52:42.102664 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-02-03 04:52:42.102670 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-02-03 04:52:42.102675 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-02-03 04:52:42.102680 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-02-03 04:52:42.102685 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-02-03 04:52:42.102690 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-02-03 04:52:42.102695 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-02-03 04:52:42.102700 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-02-03 04:52:42.102705 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-02-03 04:52:42.102710 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 2026-02-03 04:52:42.102715 | 
orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-02-03 04:52:42.102720 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-02-03 04:52:42.102729 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-02-03 04:52:42.102734 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-02-03 04:52:42.102739 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-02-03 04:52:42.102744 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-02-03 04:52:42.102754 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-02-03 04:52:42.102759 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-02-03 04:52:42.102764 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-02-03 04:52:42.102769 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-02-03 04:52:42.102774 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-02-03 04:52:42.102779 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB 2026-02-03 04:52:42.498455 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-03 04:52:42.498816 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-03 04:52:42.568461 | orchestrator | 2026-02-03 04:52:42.568525 | orchestrator | ## Containers @ testbed-node-2 2026-02-03 04:52:42.568531 | orchestrator | 
2026-02-03 04:52:42.568536 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-03 04:52:42.568541 | orchestrator | + echo 2026-02-03 04:52:42.568546 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-02-03 04:52:42.568552 | orchestrator | + echo 2026-02-03 04:52:42.568556 | orchestrator | + osism container testbed-node-2 ps 2026-02-03 04:52:45.224499 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-03 04:52:45.224620 | orchestrator | 7a4c37f6bb12 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-02-03 04:52:45.224639 | orchestrator | 2eea9ec40ae8 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-02-03 04:52:45.224652 | orchestrator | ec063d66ba5c registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-02-03 04:52:45.224723 | orchestrator | 1addb27d7cd1 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-02-03 04:52:45.224740 | orchestrator | d99d3cc2417a registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-02-03 04:52:45.224805 | orchestrator | 3efcf95eaaef registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-02-03 04:52:45.224821 | orchestrator | 3fc4554762e8 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-02-03 04:52:45.224833 | orchestrator | 76c59eab49b9 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 
minutes ago Up 10 minutes prometheus_node_exporter 2026-02-03 04:52:45.224870 | orchestrator | dd7ff94703a5 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-02-03 04:52:45.224882 | orchestrator | 200db8a5cb76 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler 2026-02-03 04:52:45.224893 | orchestrator | 651b2bd0d60f registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-02-03 04:52:45.224949 | orchestrator | ad80a29b6cbb registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-02-03 04:52:45.224988 | orchestrator | 33d8bc23e37a registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-02-03 04:52:45.225047 | orchestrator | 8ec8870ca294 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-02-03 04:52:45.225064 | orchestrator | 2ce28a422927 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-02-03 04:52:45.225299 | orchestrator | 6592b16e313c registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-02-03 04:52:45.225320 | orchestrator | d0d405666279 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-02-03 04:52:45.225332 | orchestrator | f1b5a61e2465 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes 
(healthy) ceilometer_notification 2026-02-03 04:52:45.225343 | orchestrator | 1acf717182df registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker 2026-02-03 04:52:45.225354 | orchestrator | 2b3e40a11238 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping 2026-02-03 04:52:45.225365 | orchestrator | 95b36ad8b77e registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager 2026-02-03 04:52:45.225376 | orchestrator | 72067a7dddc1 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent 2026-02-03 04:52:45.225387 | orchestrator | cd8d66c9f026 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-02-03 04:52:45.225398 | orchestrator | ce2989efc1f6 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_worker 2026-02-03 04:52:45.225409 | orchestrator | 1d846aa01e25 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns 2026-02-03 04:52:45.225432 | orchestrator | a9b78ecf10b2 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer 2026-02-03 04:52:45.225444 | orchestrator | cc68a303c8d4 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central 2026-02-03 04:52:45.225455 | orchestrator | fc62f8d69d20 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init 
--single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-02-03 04:52:45.225466 | orchestrator | 38190d616505 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 2026-02-03 04:52:45.225477 | orchestrator | 22f3af02789c registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-02-03 04:52:45.225488 | orchestrator | a03c2b49b079 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-02-03 04:52:45.225499 | orchestrator | 9d7f08b5e916 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_api 2026-02-03 04:52:45.225510 | orchestrator | ab5d2e2c3f13 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup 2026-02-03 04:52:45.225521 | orchestrator | 58811cc49689 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume 2026-02-03 04:52:45.225532 | orchestrator | 5a3bee0400c4 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler 2026-02-03 04:52:45.225554 | orchestrator | 68f8931f89a7 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api 2026-02-03 04:52:45.225566 | orchestrator | 33d7d8caa6a9 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api 2026-02-03 04:52:45.225577 | orchestrator | cb73ee148140 
registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_console 2026-02-03 04:52:45.225588 | orchestrator | ddb3a4b430d2 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_apiserver 2026-02-03 04:52:45.225599 | orchestrator | 68372248fae5 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon 2026-02-03 04:52:45.225610 | orchestrator | 3331b9a43422 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_novncproxy 2026-02-03 04:52:45.225621 | orchestrator | 82c5d1e48fa7 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_conductor 2026-02-03 04:52:45.225632 | orchestrator | f25b690d4eec registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_api 2026-02-03 04:52:45.225649 | orchestrator | 4f719b511e0b registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 44 minutes (healthy) nova_scheduler 2026-02-03 04:52:45.225660 | orchestrator | 7c56982fa038 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 50 minutes ago Up 50 minutes (healthy) neutron_server 2026-02-03 04:52:45.225671 | orchestrator | 71b6463fd98c registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) placement_api 2026-02-03 04:52:45.225682 | orchestrator | c9a03e703844 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone 2026-02-03 04:52:45.225693 | orchestrator | 21c921b23d28 
registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_fernet 2026-02-03 04:52:45.225703 | orchestrator | 2239b4877b1b registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone_ssh 2026-02-03 04:52:45.225714 | orchestrator | 5cd860bce679 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 58 minutes ago Up 58 minutes ceph-mgr-testbed-node-2 2026-02-03 04:52:45.225726 | orchestrator | 55026ae2ebac registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2 2026-02-03 04:52:45.225745 | orchestrator | 7edf8d69a692 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2 2026-02-03 04:52:45.225757 | orchestrator | 725b5ee6794d registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-02-03 04:52:45.225783 | orchestrator | 5411cc0728d9 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-02-03 04:52:45.225803 | orchestrator | be82e2dc6746 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-02-03 04:52:45.225829 | orchestrator | eef3495a2ceb registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-02-03 04:52:45.225842 | orchestrator | c72a2adc6f21 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-02-03 04:52:45.225853 | orchestrator | 7712fe62a5d9 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 
"dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-02-03 04:52:45.225864 | orchestrator | 001086a849c5 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-02-03 04:52:45.225875 | orchestrator | 5a7ac6da1c86 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-02-03 04:52:45.225886 | orchestrator | d875c7c336fd registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-02-03 04:52:45.225905 | orchestrator | 9d98fa6d24b0 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-02-03 04:52:45.225919 | orchestrator | 4de59a4e3cce registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-02-03 04:52:45.225932 | orchestrator | e702c23d118e registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-02-03 04:52:45.225945 | orchestrator | 4cc6ac03e8b8 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch 2026-02-03 04:52:45.225958 | orchestrator | 5d29080fa36c registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-02-03 04:52:45.225971 | orchestrator | 7594f46baf5e registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-02-03 04:52:45.225984 | orchestrator | a33eaed7de5c registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 
2026-02-03 04:52:45.225998 | orchestrator | 8c153639f26a registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-03 04:52:45.226010 | orchestrator | ffb8e98118bd registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-03 04:52:45.226079 | orchestrator | c8c358f9293b registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-03 04:52:45.665447 | orchestrator | 2026-02-03 04:52:45.665543 | orchestrator | ## Images @ testbed-node-2 2026-02-03 04:52:45.665560 | orchestrator | 2026-02-03 04:52:45.665572 | orchestrator | + echo 2026-02-03 04:52:45.665584 | orchestrator | + echo '## Images @ testbed-node-2' 2026-02-03 04:52:45.665596 | orchestrator | + echo 2026-02-03 04:52:45.665608 | orchestrator | + osism container testbed-node-2 images 2026-02-03 04:52:48.292823 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-03 04:52:48.292932 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-02-03 04:52:48.292957 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-02-03 04:52:48.292975 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-02-03 04:52:48.293014 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-02-03 04:52:48.293034 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-02-03 04:52:48.293051 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-03 04:52:48.293063 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-03 
04:52:48.293081 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-02-03 04:52:48.293128 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-02-03 04:52:48.293206 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-02-03 04:52:48.293235 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-03 04:52:48.293255 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-02-03 04:52:48.293274 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-02-03 04:52:48.293293 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-02-03 04:52:48.293313 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB 2026-02-03 04:52:48.293331 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-02-03 04:52:48.293345 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-02-03 04:52:48.293358 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-03 04:52:48.293372 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-02-03 04:52:48.293384 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-03 04:52:48.293397 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-02-03 
04:52:48.293410 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-02-03 04:52:48.293424 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-02-03 04:52:48.293437 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-02-03 04:52:48.293450 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-02-03 04:52:48.293463 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-02-03 04:52:48.293476 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-02-03 04:52:48.293489 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-02-03 04:52:48.293502 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-02-03 04:52:48.293514 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-02-03 04:52:48.293528 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-02-03 04:52:48.293561 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-02-03 04:52:48.293576 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-02-03 04:52:48.293588 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-02-03 04:52:48.293601 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-02-03 
04:52:48.293626 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-02-03 04:52:48.293639 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-02-03 04:52:48.293652 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-02-03 04:52:48.293675 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-02-03 04:52:48.293688 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-02-03 04:52:48.293702 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-02-03 04:52:48.293714 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-02-03 04:52:48.293725 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-02-03 04:52:48.293736 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-02-03 04:52:48.293747 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-02-03 04:52:48.293758 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-02-03 04:52:48.293769 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-02-03 04:52:48.293780 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-02-03 04:52:48.293791 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-02-03 
04:52:48.293802 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-02-03 04:52:48.293813 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-02-03 04:52:48.293832 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-02-03 04:52:48.293850 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-02-03 04:52:48.293870 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-02-03 04:52:48.293888 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-02-03 04:52:48.293903 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-02-03 04:52:48.293914 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 2026-02-03 04:52:48.293925 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-02-03 04:52:48.293936 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-02-03 04:52:48.293947 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-02-03 04:52:48.293958 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-02-03 04:52:48.293977 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-02-03 04:52:48.293988 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-02-03 
04:52:48.294008 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-02-03 04:52:48.294116 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-02-03 04:52:48.294129 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-02-03 04:52:48.294140 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-02-03 04:52:48.294202 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-02-03 04:52:48.294215 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB 2026-02-03 04:52:48.686720 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-02-03 04:52:48.694312 | orchestrator | + set -e 2026-02-03 04:52:48.694375 | orchestrator | + source /opt/manager-vars.sh 2026-02-03 04:52:48.694385 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-03 04:52:48.694392 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-03 04:52:48.694399 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-03 04:52:48.694405 | orchestrator | ++ CEPH_VERSION=reef 2026-02-03 04:52:48.694412 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-03 04:52:48.694421 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-03 04:52:48.694428 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-03 04:52:48.694435 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-03 04:52:48.694442 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-03 04:52:48.694449 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-03 04:52:48.694455 | orchestrator | ++ export ARA=false 2026-02-03 04:52:48.694463 | orchestrator | ++ ARA=false 2026-02-03 04:52:48.694469 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-03 04:52:48.694476 | orchestrator | 
++ DEPLOY_MODE=manager 2026-02-03 04:52:48.694483 | orchestrator | ++ export TEMPEST=false 2026-02-03 04:52:48.694491 | orchestrator | ++ TEMPEST=false 2026-02-03 04:52:48.694498 | orchestrator | ++ export IS_ZUUL=true 2026-02-03 04:52:48.694504 | orchestrator | ++ IS_ZUUL=true 2026-02-03 04:52:48.694511 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-02-03 04:52:48.694518 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-02-03 04:52:48.694526 | orchestrator | ++ export EXTERNAL_API=false 2026-02-03 04:52:48.694533 | orchestrator | ++ EXTERNAL_API=false 2026-02-03 04:52:48.694540 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-03 04:52:48.694547 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-03 04:52:48.694555 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-03 04:52:48.694562 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-03 04:52:48.694569 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-03 04:52:48.694576 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-03 04:52:48.694585 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-03 04:52:48.694592 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-02-03 04:52:48.700754 | orchestrator | + set -e 2026-02-03 04:52:48.700789 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-03 04:52:48.700795 | orchestrator | ++ export INTERACTIVE=false 2026-02-03 04:52:48.700801 | orchestrator | ++ INTERACTIVE=false 2026-02-03 04:52:48.700805 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-03 04:52:48.700810 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-03 04:52:48.700814 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-03 04:52:48.701699 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-02-03 04:52:48.704808 | orchestrator | 2026-02-03 04:52:48.704835 | orchestrator | 
++ export MANAGER_VERSION=9.5.0 2026-02-03 04:52:48.704840 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-03 04:52:48.704844 | orchestrator | + echo 2026-02-03 04:52:48.704849 | orchestrator | # Ceph status 2026-02-03 04:52:48.704854 | orchestrator | 2026-02-03 04:52:48.704858 | orchestrator | + echo '# Ceph status' 2026-02-03 04:52:48.704883 | orchestrator | + echo 2026-02-03 04:52:48.704888 | orchestrator | + ceph -s 2026-02-03 04:52:49.344944 | orchestrator | cluster: 2026-02-03 04:52:49.345025 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-02-03 04:52:49.345036 | orchestrator | health: HEALTH_OK 2026-02-03 04:52:49.345044 | orchestrator | 2026-02-03 04:52:49.345051 | orchestrator | services: 2026-02-03 04:52:49.345059 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 70m) 2026-02-03 04:52:49.345067 | orchestrator | mgr: testbed-node-0(active, since 58m), standbys: testbed-node-2, testbed-node-1 2026-02-03 04:52:49.345076 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-02-03 04:52:49.345083 | orchestrator | osd: 6 osds: 6 up (since 66m), 6 in (since 67m) 2026-02-03 04:52:49.345091 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-02-03 04:52:49.345097 | orchestrator | 2026-02-03 04:52:49.345105 | orchestrator | data: 2026-02-03 04:52:49.345112 | orchestrator | volumes: 1/1 healthy 2026-02-03 04:52:49.345119 | orchestrator | pools: 14 pools, 401 pgs 2026-02-03 04:52:49.345126 | orchestrator | objects: 556 objects, 2.2 GiB 2026-02-03 04:52:49.345133 | orchestrator | usage: 7.0 GiB used, 113 GiB / 120 GiB avail 2026-02-03 04:52:49.345140 | orchestrator | pgs: 401 active+clean 2026-02-03 04:52:49.345181 | orchestrator | 2026-02-03 04:52:49.402381 | orchestrator | 2026-02-03 04:52:49.402473 | orchestrator | # Ceph versions 2026-02-03 04:52:49.402488 | orchestrator | 2026-02-03 04:52:49.402500 | orchestrator | + echo 2026-02-03 04:52:49.402511 | orchestrator | + echo '# Ceph versions' 
2026-02-03 04:52:49.402523 | orchestrator | + echo
2026-02-03 04:52:49.402534 | orchestrator | + ceph versions
2026-02-03 04:52:50.010354 | orchestrator | {
2026-02-03 04:52:50.010482 | orchestrator |     "mon": {
2026-02-03 04:52:50.010500 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-03 04:52:50.010514 | orchestrator |     },
2026-02-03 04:52:50.010525 | orchestrator |     "mgr": {
2026-02-03 04:52:50.010537 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-03 04:52:50.010548 | orchestrator |     },
2026-02-03 04:52:50.010559 | orchestrator |     "osd": {
2026-02-03 04:52:50.010570 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2026-02-03 04:52:50.010581 | orchestrator |     },
2026-02-03 04:52:50.010592 | orchestrator |     "mds": {
2026-02-03 04:52:50.010603 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-03 04:52:50.010614 | orchestrator |     },
2026-02-03 04:52:50.010625 | orchestrator |     "rgw": {
2026-02-03 04:52:50.010636 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-03 04:52:50.010647 | orchestrator |     },
2026-02-03 04:52:50.010658 | orchestrator |     "overall": {
2026-02-03 04:52:50.010669 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2026-02-03 04:52:50.010680 | orchestrator |     }
2026-02-03 04:52:50.010691 | orchestrator | }
2026-02-03 04:52:50.067774 | orchestrator |
2026-02-03 04:52:50.067867 | orchestrator | # Ceph OSD tree
2026-02-03 04:52:50.067880 | orchestrator |
2026-02-03 04:52:50.067892 | orchestrator | + echo
2026-02-03 04:52:50.067902 | orchestrator | + echo '# Ceph OSD tree'
2026-02-03 04:52:50.067913 | orchestrator | + echo
2026-02-03 04:52:50.067924 | orchestrator | + ceph osd df tree
2026-02-03 04:52:50.635570 | orchestrator | ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE  VAR   PGS  STATUS  TYPE NAME
2026-02-03 04:52:50.635665 | orchestrator | -1         0.11691         -  120 GiB  7.0 GiB  6.7 GiB    6 KiB  369 MiB  113 GiB  5.87  1.00    -          root default
2026-02-03 04:52:50.635675 | orchestrator | -3         0.03897         -   40 GiB  2.3 GiB  2.2 GiB    2 KiB  123 MiB   38 GiB  5.87  1.00    -              host testbed-node-3
2026-02-03 04:52:50.635683 | orchestrator |  0    hdd  0.01949   1.00000   20 GiB  1.1 GiB  1.0 GiB    1 KiB   62 MiB   19 GiB  5.32  0.91  189      up          osd.0
2026-02-03 04:52:50.635690 | orchestrator |  3    hdd  0.01949   1.00000   20 GiB  1.3 GiB  1.2 GiB    1 KiB   62 MiB   19 GiB  6.41  1.09  201      up          osd.3
2026-02-03 04:52:50.635697 | orchestrator | -7         0.03897         -   40 GiB  2.3 GiB  2.2 GiB    2 KiB  123 MiB   38 GiB  5.87  1.00    -              host testbed-node-4
2026-02-03 04:52:50.635703 | orchestrator |  1    hdd  0.01949   1.00000   20 GiB  1.2 GiB  1.2 GiB    1 KiB   62 MiB   19 GiB  6.18  1.05  192      up          osd.1
2026-02-03 04:52:50.635730 | orchestrator |  4    hdd  0.01949   1.00000   20 GiB  1.1 GiB  1.1 GiB    1 KiB   62 MiB   19 GiB  5.56  0.95  196      up          osd.4
2026-02-03 04:52:50.635738 | orchestrator | -5         0.03897         -   40 GiB  2.3 GiB  2.2 GiB    2 KiB  123 MiB   38 GiB  5.87  1.00    -              host testbed-node-5
2026-02-03 04:52:50.635746 | orchestrator |  2    hdd  0.01949   1.00000   20 GiB  1.4 GiB  1.4 GiB    1 KiB   62 MiB   19 GiB  7.12  1.21  206      up          osd.2
2026-02-03 04:52:50.635754 | orchestrator |  5    hdd  0.01949   1.00000   20 GiB  944 MiB  883 MiB    1 KiB   62 MiB   19 GiB  4.62  0.79  186      up          osd.5
2026-02-03 04:52:50.635762 | orchestrator |                         TOTAL  120 GiB  7.0 GiB  6.7 GiB  9.3 KiB  369 MiB  113 GiB  5.87
2026-02-03 04:52:50.635775 | orchestrator | MIN/MAX VAR: 0.79/1.21  STDDEV: 0.81
2026-02-03 04:52:50.680760 | orchestrator |
2026-02-03 04:52:50.680871 | orchestrator | # Ceph monitor status
2026-02-03 04:52:50.680896 | orchestrator |
2026-02-03 04:52:50.680911 | orchestrator | + echo
2026-02-03 04:52:50.680948 | orchestrator | + echo '# Ceph monitor status'
2026-02-03 04:52:50.680961 | orchestrator | + echo
2026-02-03 04:52:50.680972 | orchestrator | + ceph mon stat
2026-02-03 04:52:51.320464
| orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-02-03 04:52:51.374943 | orchestrator | 2026-02-03 04:52:51.375030 | orchestrator | # Ceph quorum status 2026-02-03 04:52:51.375044 | orchestrator | 2026-02-03 04:52:51.375055 | orchestrator | + echo 2026-02-03 04:52:51.375071 | orchestrator | + echo '# Ceph quorum status' 2026-02-03 04:52:51.375087 | orchestrator | + echo 2026-02-03 04:52:51.375104 | orchestrator | + ceph quorum_status 2026-02-03 04:52:51.375120 | orchestrator | + jq 2026-02-03 04:52:52.061705 | orchestrator | { 2026-02-03 04:52:52.061805 | orchestrator | "election_epoch": 8, 2026-02-03 04:52:52.061827 | orchestrator | "quorum": [ 2026-02-03 04:52:52.061847 | orchestrator | 0, 2026-02-03 04:52:52.061866 | orchestrator | 1, 2026-02-03 04:52:52.061884 | orchestrator | 2 2026-02-03 04:52:52.061903 | orchestrator | ], 2026-02-03 04:52:52.061921 | orchestrator | "quorum_names": [ 2026-02-03 04:52:52.061941 | orchestrator | "testbed-node-0", 2026-02-03 04:52:52.061960 | orchestrator | "testbed-node-1", 2026-02-03 04:52:52.061976 | orchestrator | "testbed-node-2" 2026-02-03 04:52:52.061987 | orchestrator | ], 2026-02-03 04:52:52.061999 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-02-03 04:52:52.062011 | orchestrator | "quorum_age": 4250, 2026-02-03 04:52:52.062081 | orchestrator | "features": { 2026-02-03 04:52:52.062093 | orchestrator | "quorum_con": "4540138322906710015", 2026-02-03 04:52:52.062104 | orchestrator | "quorum_mon": [ 2026-02-03 04:52:52.062115 | orchestrator | "kraken", 2026-02-03 04:52:52.062126 | orchestrator | "luminous", 2026-02-03 04:52:52.062138 | orchestrator | "mimic", 2026-02-03 04:52:52.062181 
| orchestrator | "osdmap-prune", 2026-02-03 04:52:52.062204 | orchestrator | "nautilus", 2026-02-03 04:52:52.062216 | orchestrator | "octopus", 2026-02-03 04:52:52.062226 | orchestrator | "pacific", 2026-02-03 04:52:52.062238 | orchestrator | "elector-pinging", 2026-02-03 04:52:52.062249 | orchestrator | "quincy", 2026-02-03 04:52:52.062260 | orchestrator | "reef" 2026-02-03 04:52:52.062271 | orchestrator | ] 2026-02-03 04:52:52.062284 | orchestrator | }, 2026-02-03 04:52:52.062298 | orchestrator | "monmap": { 2026-02-03 04:52:52.062311 | orchestrator | "epoch": 1, 2026-02-03 04:52:52.062324 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-02-03 04:52:52.062338 | orchestrator | "modified": "2026-02-03T03:41:44.550331Z", 2026-02-03 04:52:52.062350 | orchestrator | "created": "2026-02-03T03:41:44.550331Z", 2026-02-03 04:52:52.062361 | orchestrator | "min_mon_release": 18, 2026-02-03 04:52:52.062372 | orchestrator | "min_mon_release_name": "reef", 2026-02-03 04:52:52.062383 | orchestrator | "election_strategy": 1, 2026-02-03 04:52:52.062394 | orchestrator | "disallowed_leaders: ": "", 2026-02-03 04:52:52.062405 | orchestrator | "stretch_mode": false, 2026-02-03 04:52:52.062416 | orchestrator | "tiebreaker_mon": "", 2026-02-03 04:52:52.062427 | orchestrator | "removed_ranks: ": "", 2026-02-03 04:52:52.062437 | orchestrator | "features": { 2026-02-03 04:52:52.062448 | orchestrator | "persistent": [ 2026-02-03 04:52:52.062459 | orchestrator | "kraken", 2026-02-03 04:52:52.062497 | orchestrator | "luminous", 2026-02-03 04:52:52.062509 | orchestrator | "mimic", 2026-02-03 04:52:52.062520 | orchestrator | "osdmap-prune", 2026-02-03 04:52:52.062531 | orchestrator | "nautilus", 2026-02-03 04:52:52.062541 | orchestrator | "octopus", 2026-02-03 04:52:52.062552 | orchestrator | "pacific", 2026-02-03 04:52:52.062563 | orchestrator | "elector-pinging", 2026-02-03 04:52:52.062574 | orchestrator | "quincy", 2026-02-03 04:52:52.062585 | orchestrator | "reef" 
2026-02-03 04:52:52.062596 | orchestrator | ], 2026-02-03 04:52:52.062606 | orchestrator | "optional": [] 2026-02-03 04:52:52.062617 | orchestrator | }, 2026-02-03 04:52:52.062628 | orchestrator | "mons": [ 2026-02-03 04:52:52.062639 | orchestrator | { 2026-02-03 04:52:52.062667 | orchestrator | "rank": 0, 2026-02-03 04:52:52.062679 | orchestrator | "name": "testbed-node-0", 2026-02-03 04:52:52.062690 | orchestrator | "public_addrs": { 2026-02-03 04:52:52.062701 | orchestrator | "addrvec": [ 2026-02-03 04:52:52.062712 | orchestrator | { 2026-02-03 04:52:52.062724 | orchestrator | "type": "v2", 2026-02-03 04:52:52.062735 | orchestrator | "addr": "192.168.16.10:3300", 2026-02-03 04:52:52.062746 | orchestrator | "nonce": 0 2026-02-03 04:52:52.062757 | orchestrator | }, 2026-02-03 04:52:52.062768 | orchestrator | { 2026-02-03 04:52:52.062779 | orchestrator | "type": "v1", 2026-02-03 04:52:52.062790 | orchestrator | "addr": "192.168.16.10:6789", 2026-02-03 04:52:52.062801 | orchestrator | "nonce": 0 2026-02-03 04:52:52.062812 | orchestrator | } 2026-02-03 04:52:52.062823 | orchestrator | ] 2026-02-03 04:52:52.062834 | orchestrator | }, 2026-02-03 04:52:52.062846 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-02-03 04:52:52.062857 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-02-03 04:52:52.062868 | orchestrator | "priority": 0, 2026-02-03 04:52:52.062879 | orchestrator | "weight": 0, 2026-02-03 04:52:52.062889 | orchestrator | "crush_location": "{}" 2026-02-03 04:52:52.062900 | orchestrator | }, 2026-02-03 04:52:52.062911 | orchestrator | { 2026-02-03 04:52:52.062922 | orchestrator | "rank": 1, 2026-02-03 04:52:52.062933 | orchestrator | "name": "testbed-node-1", 2026-02-03 04:52:52.062944 | orchestrator | "public_addrs": { 2026-02-03 04:52:52.062955 | orchestrator | "addrvec": [ 2026-02-03 04:52:52.062966 | orchestrator | { 2026-02-03 04:52:52.062976 | orchestrator | "type": "v2", 2026-02-03 04:52:52.062987 | orchestrator | "addr": 
"192.168.16.11:3300", 2026-02-03 04:52:52.063006 | orchestrator | "nonce": 0 2026-02-03 04:52:52.063025 | orchestrator | }, 2026-02-03 04:52:52.063045 | orchestrator | { 2026-02-03 04:52:52.063064 | orchestrator | "type": "v1", 2026-02-03 04:52:52.063079 | orchestrator | "addr": "192.168.16.11:6789", 2026-02-03 04:52:52.063090 | orchestrator | "nonce": 0 2026-02-03 04:52:52.063101 | orchestrator | } 2026-02-03 04:52:52.063112 | orchestrator | ] 2026-02-03 04:52:52.063123 | orchestrator | }, 2026-02-03 04:52:52.063134 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-02-03 04:52:52.063145 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-02-03 04:52:52.063198 | orchestrator | "priority": 0, 2026-02-03 04:52:52.063210 | orchestrator | "weight": 0, 2026-02-03 04:52:52.063220 | orchestrator | "crush_location": "{}" 2026-02-03 04:52:52.063231 | orchestrator | }, 2026-02-03 04:52:52.063242 | orchestrator | { 2026-02-03 04:52:52.063253 | orchestrator | "rank": 2, 2026-02-03 04:52:52.063264 | orchestrator | "name": "testbed-node-2", 2026-02-03 04:52:52.063275 | orchestrator | "public_addrs": { 2026-02-03 04:52:52.063286 | orchestrator | "addrvec": [ 2026-02-03 04:52:52.063297 | orchestrator | { 2026-02-03 04:52:52.063308 | orchestrator | "type": "v2", 2026-02-03 04:52:52.063318 | orchestrator | "addr": "192.168.16.12:3300", 2026-02-03 04:52:52.063329 | orchestrator | "nonce": 0 2026-02-03 04:52:52.063340 | orchestrator | }, 2026-02-03 04:52:52.063351 | orchestrator | { 2026-02-03 04:52:52.063362 | orchestrator | "type": "v1", 2026-02-03 04:52:52.063373 | orchestrator | "addr": "192.168.16.12:6789", 2026-02-03 04:52:52.063384 | orchestrator | "nonce": 0 2026-02-03 04:52:52.063395 | orchestrator | } 2026-02-03 04:52:52.063406 | orchestrator | ] 2026-02-03 04:52:52.063417 | orchestrator | }, 2026-02-03 04:52:52.063428 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-02-03 04:52:52.063439 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-02-03 
04:52:52.063450 | orchestrator |                 "priority": 0,
2026-02-03 04:52:52.063469 | orchestrator |                 "weight": 0,
2026-02-03 04:52:52.063480 | orchestrator |                 "crush_location": "{}"
2026-02-03 04:52:52.063491 | orchestrator |             }
2026-02-03 04:52:52.063502 | orchestrator |         ]
2026-02-03 04:52:52.063513 | orchestrator |     }
2026-02-03 04:52:52.063524 | orchestrator | }
2026-02-03 04:52:52.063536 | orchestrator |
2026-02-03 04:52:52.063547 | orchestrator | + echo
2026-02-03 04:52:52.063558 | orchestrator | # Ceph free space status
2026-02-03 04:52:52.063569 | orchestrator | + echo '# Ceph free space status'
2026-02-03 04:52:52.063580 | orchestrator | + echo
2026-02-03 04:52:52.063591 | orchestrator |
2026-02-03 04:52:52.063602 | orchestrator | + ceph df
2026-02-03 04:52:52.637615 | orchestrator | --- RAW STORAGE ---
2026-02-03 04:52:52.637706 | orchestrator | CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
2026-02-03 04:52:52.637730 | orchestrator | hdd    120 GiB  113 GiB  7.0 GiB   7.0 GiB       5.87
2026-02-03 04:52:52.637751 | orchestrator | TOTAL  120 GiB  113 GiB  7.0 GiB   7.0 GiB       5.87
2026-02-03 04:52:52.637761 | orchestrator |
2026-02-03 04:52:52.637771 | orchestrator | --- POOLS ---
2026-02-03 04:52:52.637781 | orchestrator | POOL                       ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
2026-02-03 04:52:52.637791 | orchestrator | .mgr                        1    1  577 KiB        2  1.1 MiB      0     53 GiB
2026-02-03 04:52:52.637800 | orchestrator | cephfs_data                 2   32      0 B        0      0 B      0     35 GiB
2026-02-03 04:52:52.637809 | orchestrator | cephfs_metadata             3   16  4.4 KiB       22   96 KiB      0     35 GiB
2026-02-03 04:52:52.637818 | orchestrator | default.rgw.buckets.data    4   32      0 B        0      0 B      0     35 GiB
2026-02-03 04:52:52.637827 | orchestrator | default.rgw.buckets.index   5   32      0 B        0      0 B      0     35 GiB
2026-02-03 04:52:52.637836 | orchestrator | default.rgw.control         6   32      0 B        8      0 B      0     35 GiB
2026-02-03 04:52:52.637845 | orchestrator | default.rgw.log             7   32  3.6 KiB      209  408 KiB      0     35 GiB
2026-02-03 04:52:52.637854 | orchestrator | default.rgw.meta            8   32      0 B        0      0 B      0     35 GiB
2026-02-03 04:52:52.637863 | orchestrator | .rgw.root                   9   32  3.9 KiB        8   64 KiB      0     53 GiB
2026-02-03 04:52:52.637872 | orchestrator | backups                    10   32     19 B        2   12 KiB      0     35 GiB
2026-02-03 04:52:52.637881 | orchestrator | volumes                    11   32     19 B        2   12 KiB      0     35 GiB
2026-02-03 04:52:52.637890 | orchestrator | images                     12   32  2.2 GiB      299  6.7 GiB   5.94     35 GiB
2026-02-03 04:52:52.637898 | orchestrator | metrics                    13   32     19 B        2   12 KiB      0     35 GiB
2026-02-03 04:52:52.637907 | orchestrator | vms                        14   32     19 B        2   12 KiB      0     35 GiB
2026-02-03 04:52:52.695120 | orchestrator | ++ semver 9.5.0 5.0.0
2026-02-03 04:52:52.755042 | orchestrator | + [[ 1 -eq -1 ]]
2026-02-03 04:52:52.755133 | orchestrator | + [[ ! -e /etc/redhat-release ]]
2026-02-03 04:52:52.755149 | orchestrator | + osism apply facts
2026-02-03 04:52:55.075439 | orchestrator | 2026-02-03 04:52:55 | INFO  | Task e6ea4dbe-0b5f-4f9e-8415-2cd6252564a2 (facts) was prepared for execution.
2026-02-03 04:52:55.075546 | orchestrator | 2026-02-03 04:52:55 | INFO  | It takes a moment until task e6ea4dbe-0b5f-4f9e-8415-2cd6252564a2 (facts) has been started and output is visible here.
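The trace above gates on `++ semver 9.5.0 5.0.0` followed by `+ [[ 1 -eq -1 ]]`: the helper prints `-1`, `0`, or `1` depending on whether the first version is older than, equal to, or newer than the second, and the legacy branch only runs when the result is `-1`. A minimal sketch of such a comparator (a hypothetical stand-in; the actual `semver` helper shipped with the testbed scripts may be implemented differently):

```shell
# Hypothetical stand-in for the `semver` helper seen in the trace.
# Prints -1, 0, or 1 for a < b, a == b, a > b.
# Relies on `sort -V` (version sort), a GNU coreutils/BSD extension.
semver() {
  if [ "$1" = "$2" ]; then
    echo 0
  elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)" = "$1" ]; then
    echo -1   # $1 sorts first, so it is the older version
  else
    echo 1
  fi
}

semver 9.5.0 5.0.0
```

With `MANAGER_VERSION=9.5.0` the comparator prints `1`, so `[[ 1 -eq -1 ]]` is false and the pre-5.0.0 branch is skipped, matching the trace.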
2026-02-03 04:53:10.313235 | orchestrator | 2026-02-03 04:53:10.313364 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-03 04:53:10.313390 | orchestrator | 2026-02-03 04:53:10.313412 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-03 04:53:10.313433 | orchestrator | Tuesday 03 February 2026 04:52:59 +0000 (0:00:00.307) 0:00:00.307 ****** 2026-02-03 04:53:10.313452 | orchestrator | ok: [testbed-manager] 2026-02-03 04:53:10.313474 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:53:10.313485 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:53:10.313497 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:53:10.313508 | orchestrator | ok: [testbed-node-3] 2026-02-03 04:53:10.313519 | orchestrator | ok: [testbed-node-4] 2026-02-03 04:53:10.313530 | orchestrator | ok: [testbed-node-5] 2026-02-03 04:53:10.313541 | orchestrator | 2026-02-03 04:53:10.313552 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-03 04:53:10.313591 | orchestrator | Tuesday 03 February 2026 04:53:01 +0000 (0:00:01.399) 0:00:01.707 ****** 2026-02-03 04:53:10.313603 | orchestrator | skipping: [testbed-manager] 2026-02-03 04:53:10.313614 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:53:10.313625 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:53:10.313636 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:53:10.313647 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:53:10.313658 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:53:10.313669 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:53:10.313679 | orchestrator | 2026-02-03 04:53:10.313690 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-03 04:53:10.313701 | orchestrator | 2026-02-03 04:53:10.313712 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-03 04:53:10.313723 | orchestrator | Tuesday 03 February 2026 04:53:03 +0000 (0:00:01.647) 0:00:03.354 ****** 2026-02-03 04:53:10.313735 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:53:10.313748 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:53:10.313761 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:53:10.313775 | orchestrator | ok: [testbed-manager] 2026-02-03 04:53:10.313787 | orchestrator | ok: [testbed-node-5] 2026-02-03 04:53:10.313800 | orchestrator | ok: [testbed-node-4] 2026-02-03 04:53:10.313815 | orchestrator | ok: [testbed-node-3] 2026-02-03 04:53:10.313828 | orchestrator | 2026-02-03 04:53:10.313841 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-03 04:53:10.313854 | orchestrator | 2026-02-03 04:53:10.313867 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-03 04:53:10.313879 | orchestrator | Tuesday 03 February 2026 04:53:09 +0000 (0:00:06.069) 0:00:09.423 ****** 2026-02-03 04:53:10.313890 | orchestrator | skipping: [testbed-manager] 2026-02-03 04:53:10.313901 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:53:10.313912 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:53:10.313923 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:53:10.313934 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:53:10.313945 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:53:10.313956 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:53:10.313966 | orchestrator | 2026-02-03 04:53:10.313977 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 04:53:10.313988 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 04:53:10.314000 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-03 04:53:10.314011 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 04:53:10.314096 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 04:53:10.314108 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 04:53:10.314119 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 04:53:10.314130 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 04:53:10.314140 | orchestrator | 2026-02-03 04:53:10.314151 | orchestrator | 2026-02-03 04:53:10.314162 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 04:53:10.314197 | orchestrator | Tuesday 03 February 2026 04:53:09 +0000 (0:00:00.650) 0:00:10.074 ****** 2026-02-03 04:53:10.314208 | orchestrator | =============================================================================== 2026-02-03 04:53:10.314219 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.07s 2026-02-03 04:53:10.314240 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.65s 2026-02-03 04:53:10.314251 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.40s 2026-02-03 04:53:10.314262 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.65s 2026-02-03 04:53:10.753620 | orchestrator | + osism validate ceph-mons 2026-02-03 04:53:43.059535 | orchestrator | 2026-02-03 04:53:43.059647 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-02-03 04:53:43.059664 | orchestrator | 2026-02-03 04:53:43.059676 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2026-02-03 04:53:43.059689 | orchestrator | Tuesday 03 February 2026 04:53:27 +0000 (0:00:00.454) 0:00:00.454 ****** 2026-02-03 04:53:43.059701 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-03 04:53:43.059712 | orchestrator | 2026-02-03 04:53:43.059724 | orchestrator | TASK [Create report output directory] ****************************************** 2026-02-03 04:53:43.059735 | orchestrator | Tuesday 03 February 2026 04:53:28 +0000 (0:00:00.754) 0:00:01.209 ****** 2026-02-03 04:53:43.059746 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-03 04:53:43.059757 | orchestrator | 2026-02-03 04:53:43.059768 | orchestrator | TASK [Define report vars] ****************************************************** 2026-02-03 04:53:43.059779 | orchestrator | Tuesday 03 February 2026 04:53:29 +0000 (0:00:00.856) 0:00:02.065 ****** 2026-02-03 04:53:43.059790 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:53:43.059802 | orchestrator | 2026-02-03 04:53:43.059813 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-02-03 04:53:43.059824 | orchestrator | Tuesday 03 February 2026 04:53:29 +0000 (0:00:00.101) 0:00:02.167 ****** 2026-02-03 04:53:43.059835 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:53:43.059846 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:53:43.059857 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:53:43.059868 | orchestrator | 2026-02-03 04:53:43.059879 | orchestrator | TASK [Get container info] ****************************************************** 2026-02-03 04:53:43.059890 | orchestrator | Tuesday 03 February 2026 04:53:29 +0000 (0:00:00.277) 0:00:02.444 ****** 2026-02-03 04:53:43.059901 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:53:43.059912 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:53:43.059923 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:53:43.059935 | 
orchestrator | 2026-02-03 04:53:43.059946 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-02-03 04:53:43.059956 | orchestrator | Tuesday 03 February 2026 04:53:30 +0000 (0:00:00.912) 0:00:03.356 ****** 2026-02-03 04:53:43.059967 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:53:43.059979 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:53:43.059990 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:53:43.060000 | orchestrator | 2026-02-03 04:53:43.060012 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-02-03 04:53:43.060023 | orchestrator | Tuesday 03 February 2026 04:53:30 +0000 (0:00:00.274) 0:00:03.631 ****** 2026-02-03 04:53:43.060033 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:53:43.060045 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:53:43.060058 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:53:43.060072 | orchestrator | 2026-02-03 04:53:43.060085 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-03 04:53:43.060097 | orchestrator | Tuesday 03 February 2026 04:53:31 +0000 (0:00:00.428) 0:00:04.060 ****** 2026-02-03 04:53:43.060110 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:53:43.060123 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:53:43.060136 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:53:43.060149 | orchestrator | 2026-02-03 04:53:43.060162 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-02-03 04:53:43.060175 | orchestrator | Tuesday 03 February 2026 04:53:31 +0000 (0:00:00.293) 0:00:04.353 ****** 2026-02-03 04:53:43.060188 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:53:43.060256 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:53:43.060272 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:53:43.060285 | orchestrator | 2026-02-03 
04:53:43.060298 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-02-03 04:53:43.060311 | orchestrator | Tuesday 03 February 2026 04:53:31 +0000 (0:00:00.283) 0:00:04.637 ****** 2026-02-03 04:53:43.060324 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:53:43.060337 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:53:43.060350 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:53:43.060364 | orchestrator | 2026-02-03 04:53:43.060377 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-03 04:53:43.060391 | orchestrator | Tuesday 03 February 2026 04:53:32 +0000 (0:00:00.416) 0:00:05.054 ****** 2026-02-03 04:53:43.060404 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:53:43.060415 | orchestrator | 2026-02-03 04:53:43.060426 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-03 04:53:43.060437 | orchestrator | Tuesday 03 February 2026 04:53:32 +0000 (0:00:00.235) 0:00:05.289 ****** 2026-02-03 04:53:43.060447 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:53:43.060458 | orchestrator | 2026-02-03 04:53:43.060469 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-03 04:53:43.060480 | orchestrator | Tuesday 03 February 2026 04:53:32 +0000 (0:00:00.251) 0:00:05.541 ****** 2026-02-03 04:53:43.060491 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:53:43.060501 | orchestrator | 2026-02-03 04:53:43.060512 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-03 04:53:43.060523 | orchestrator | Tuesday 03 February 2026 04:53:33 +0000 (0:00:00.272) 0:00:05.813 ****** 2026-02-03 04:53:43.060581 | orchestrator | 2026-02-03 04:53:43.060593 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-03 04:53:43.060603 | orchestrator | 
Tuesday 03 February 2026 04:53:33 +0000 (0:00:00.070) 0:00:05.884 ****** 2026-02-03 04:53:43.060614 | orchestrator | 2026-02-03 04:53:43.060625 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-03 04:53:43.060636 | orchestrator | Tuesday 03 February 2026 04:53:33 +0000 (0:00:00.069) 0:00:05.953 ****** 2026-02-03 04:53:43.060646 | orchestrator | 2026-02-03 04:53:43.060657 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-03 04:53:43.060668 | orchestrator | Tuesday 03 February 2026 04:53:33 +0000 (0:00:00.069) 0:00:06.023 ****** 2026-02-03 04:53:43.060679 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:53:43.060689 | orchestrator | 2026-02-03 04:53:43.060700 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-02-03 04:53:43.060727 | orchestrator | Tuesday 03 February 2026 04:53:33 +0000 (0:00:00.245) 0:00:06.269 ****** 2026-02-03 04:53:43.060739 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:53:43.060750 | orchestrator | 2026-02-03 04:53:43.060780 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-02-03 04:53:43.060792 | orchestrator | Tuesday 03 February 2026 04:53:33 +0000 (0:00:00.229) 0:00:06.498 ****** 2026-02-03 04:53:43.060803 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:53:43.060813 | orchestrator | 2026-02-03 04:53:43.060824 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-02-03 04:53:43.060835 | orchestrator | Tuesday 03 February 2026 04:53:33 +0000 (0:00:00.125) 0:00:06.623 ****** 2026-02-03 04:53:43.060846 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:53:43.060862 | orchestrator | 2026-02-03 04:53:43.060873 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-02-03 04:53:43.060884 | orchestrator | 
Tuesday 03 February 2026 04:53:35 +0000 (0:00:01.456) 0:00:08.079 ****** 2026-02-03 04:53:43.060895 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:53:43.060905 | orchestrator | 2026-02-03 04:53:43.060916 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-02-03 04:53:43.060927 | orchestrator | Tuesday 03 February 2026 04:53:35 +0000 (0:00:00.520) 0:00:08.600 ****** 2026-02-03 04:53:43.060938 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:53:43.060958 | orchestrator | 2026-02-03 04:53:43.060970 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-02-03 04:53:43.060980 | orchestrator | Tuesday 03 February 2026 04:53:36 +0000 (0:00:00.132) 0:00:08.733 ****** 2026-02-03 04:53:43.060991 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:53:43.061002 | orchestrator | 2026-02-03 04:53:43.061013 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-02-03 04:53:43.061024 | orchestrator | Tuesday 03 February 2026 04:53:36 +0000 (0:00:00.338) 0:00:09.072 ****** 2026-02-03 04:53:43.061034 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:53:43.061045 | orchestrator | 2026-02-03 04:53:43.061056 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-02-03 04:53:43.061067 | orchestrator | Tuesday 03 February 2026 04:53:36 +0000 (0:00:00.343) 0:00:09.415 ****** 2026-02-03 04:53:43.061077 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:53:43.061088 | orchestrator | 2026-02-03 04:53:43.061099 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-02-03 04:53:43.061110 | orchestrator | Tuesday 03 February 2026 04:53:36 +0000 (0:00:00.170) 0:00:09.585 ****** 2026-02-03 04:53:43.061121 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:53:43.061131 | orchestrator | 2026-02-03 04:53:43.061142 | orchestrator | TASK 
[Prepare status test vars] ************************************************ 2026-02-03 04:53:43.061153 | orchestrator | Tuesday 03 February 2026 04:53:37 +0000 (0:00:00.134) 0:00:09.719 ****** 2026-02-03 04:53:43.061164 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:53:43.061174 | orchestrator | 2026-02-03 04:53:43.061185 | orchestrator | TASK [Gather status data] ****************************************************** 2026-02-03 04:53:43.061196 | orchestrator | Tuesday 03 February 2026 04:53:37 +0000 (0:00:00.127) 0:00:09.847 ****** 2026-02-03 04:53:43.061248 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:53:43.061262 | orchestrator | 2026-02-03 04:53:43.061273 | orchestrator | TASK [Set health test data] **************************************************** 2026-02-03 04:53:43.061284 | orchestrator | Tuesday 03 February 2026 04:53:38 +0000 (0:00:01.391) 0:00:11.238 ****** 2026-02-03 04:53:43.061294 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:53:43.061305 | orchestrator | 2026-02-03 04:53:43.061316 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-02-03 04:53:43.061327 | orchestrator | Tuesday 03 February 2026 04:53:38 +0000 (0:00:00.345) 0:00:11.584 ****** 2026-02-03 04:53:43.061337 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:53:43.061348 | orchestrator | 2026-02-03 04:53:43.061359 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-02-03 04:53:43.061370 | orchestrator | Tuesday 03 February 2026 04:53:39 +0000 (0:00:00.159) 0:00:11.744 ****** 2026-02-03 04:53:43.061381 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:53:43.061391 | orchestrator | 2026-02-03 04:53:43.061402 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-02-03 04:53:43.061413 | orchestrator | Tuesday 03 February 2026 04:53:39 +0000 (0:00:00.165) 0:00:11.909 ****** 2026-02-03 04:53:43.061424 | 
orchestrator | skipping: [testbed-node-0] 2026-02-03 04:53:43.061434 | orchestrator | 2026-02-03 04:53:43.061445 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-02-03 04:53:43.061456 | orchestrator | Tuesday 03 February 2026 04:53:39 +0000 (0:00:00.138) 0:00:12.048 ****** 2026-02-03 04:53:43.061473 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:53:43.061484 | orchestrator | 2026-02-03 04:53:43.061494 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-02-03 04:53:43.061505 | orchestrator | Tuesday 03 February 2026 04:53:39 +0000 (0:00:00.377) 0:00:12.426 ****** 2026-02-03 04:53:43.061516 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-03 04:53:43.061527 | orchestrator | 2026-02-03 04:53:43.061537 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-02-03 04:53:43.061548 | orchestrator | Tuesday 03 February 2026 04:53:40 +0000 (0:00:00.308) 0:00:12.734 ****** 2026-02-03 04:53:43.061568 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:53:43.061579 | orchestrator | 2026-02-03 04:53:43.061590 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-03 04:53:43.061601 | orchestrator | Tuesday 03 February 2026 04:53:40 +0000 (0:00:00.279) 0:00:13.014 ****** 2026-02-03 04:53:43.061612 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-03 04:53:43.061623 | orchestrator | 2026-02-03 04:53:43.061634 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-03 04:53:43.061644 | orchestrator | Tuesday 03 February 2026 04:53:42 +0000 (0:00:01.867) 0:00:14.882 ****** 2026-02-03 04:53:43.061655 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-03 04:53:43.061665 | orchestrator | 2026-02-03 04:53:43.061676 | orchestrator | 
TASK [Aggregate test results step three] *************************************** 2026-02-03 04:53:43.061687 | orchestrator | Tuesday 03 February 2026 04:53:42 +0000 (0:00:00.285) 0:00:15.167 ****** 2026-02-03 04:53:43.061697 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-03 04:53:43.061708 | orchestrator | 2026-02-03 04:53:43.061726 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-03 04:53:46.065406 | orchestrator | Tuesday 03 February 2026 04:53:42 +0000 (0:00:00.307) 0:00:15.475 ****** 2026-02-03 04:53:46.065503 | orchestrator | 2026-02-03 04:53:46.065513 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-03 04:53:46.065521 | orchestrator | Tuesday 03 February 2026 04:53:42 +0000 (0:00:00.088) 0:00:15.563 ****** 2026-02-03 04:53:46.065528 | orchestrator | 2026-02-03 04:53:46.065535 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-03 04:53:46.065542 | orchestrator | Tuesday 03 February 2026 04:53:42 +0000 (0:00:00.091) 0:00:15.654 ****** 2026-02-03 04:53:46.065548 | orchestrator | 2026-02-03 04:53:46.065555 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-02-03 04:53:46.065561 | orchestrator | Tuesday 03 February 2026 04:53:43 +0000 (0:00:00.083) 0:00:15.738 ****** 2026-02-03 04:53:46.065568 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-03 04:53:46.065574 | orchestrator | 2026-02-03 04:53:46.065581 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-03 04:53:46.065587 | orchestrator | Tuesday 03 February 2026 04:53:44 +0000 (0:00:01.657) 0:00:17.395 ****** 2026-02-03 04:53:46.065593 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-02-03 04:53:46.065600 | orchestrator |  "msg": [ 
2026-02-03 04:53:46.065607 | orchestrator |  "Validator run completed.", 2026-02-03 04:53:46.065614 | orchestrator |  "You can find the report file here:", 2026-02-03 04:53:46.065620 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-02-03T04:53:28+00:00-report.json", 2026-02-03 04:53:46.065627 | orchestrator |  "on the following host:", 2026-02-03 04:53:46.065633 | orchestrator |  "testbed-manager" 2026-02-03 04:53:46.065640 | orchestrator |  ] 2026-02-03 04:53:46.065646 | orchestrator | } 2026-02-03 04:53:46.065653 | orchestrator | 2026-02-03 04:53:46.065659 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 04:53:46.065667 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-02-03 04:53:46.065675 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 04:53:46.065681 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 04:53:46.065688 | orchestrator | 2026-02-03 04:53:46.065694 | orchestrator | 2026-02-03 04:53:46.065700 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 04:53:46.065707 | orchestrator | Tuesday 03 February 2026 04:53:45 +0000 (0:00:00.969) 0:00:18.365 ****** 2026-02-03 04:53:46.065733 | orchestrator | =============================================================================== 2026-02-03 04:53:46.065740 | orchestrator | Aggregate test results step one ----------------------------------------- 1.87s 2026-02-03 04:53:46.065746 | orchestrator | Write report file ------------------------------------------------------- 1.66s 2026-02-03 04:53:46.065752 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.46s 2026-02-03 04:53:46.065758 | orchestrator | Gather status data 
------------------------------------------------------ 1.39s 2026-02-03 04:53:46.065765 | orchestrator | Print report file information ------------------------------------------- 0.97s 2026-02-03 04:53:46.065771 | orchestrator | Get container info ------------------------------------------------------ 0.91s 2026-02-03 04:53:46.065777 | orchestrator | Create report output directory ------------------------------------------ 0.86s 2026-02-03 04:53:46.065783 | orchestrator | Get timestamp for report file ------------------------------------------- 0.75s 2026-02-03 04:53:46.065789 | orchestrator | Set quorum test data ---------------------------------------------------- 0.52s 2026-02-03 04:53:46.065795 | orchestrator | Set test result to passed if container is existing ---------------------- 0.43s 2026-02-03 04:53:46.065813 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.42s 2026-02-03 04:53:46.065819 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.38s 2026-02-03 04:53:46.065825 | orchestrator | Set health test data ---------------------------------------------------- 0.35s 2026-02-03 04:53:46.065831 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.34s 2026-02-03 04:53:46.065837 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.34s 2026-02-03 04:53:46.065843 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.31s 2026-02-03 04:53:46.065850 | orchestrator | Aggregate test results step three --------------------------------------- 0.31s 2026-02-03 04:53:46.065856 | orchestrator | Prepare test data ------------------------------------------------------- 0.29s 2026-02-03 04:53:46.065862 | orchestrator | Aggregate test results step two ----------------------------------------- 0.29s 2026-02-03 04:53:46.065868 | orchestrator | Set test result to failed if 
ceph-mon is not running -------------------- 0.28s 2026-02-03 04:53:46.572780 | orchestrator | + osism validate ceph-mgrs 2026-02-03 04:54:20.256983 | orchestrator | 2026-02-03 04:54:20.257130 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-02-03 04:54:20.257152 | orchestrator | 2026-02-03 04:54:20.257166 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-02-03 04:54:20.257178 | orchestrator | Tuesday 03 February 2026 04:54:04 +0000 (0:00:00.476) 0:00:00.476 ****** 2026-02-03 04:54:20.257190 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-03 04:54:20.257201 | orchestrator | 2026-02-03 04:54:20.257212 | orchestrator | TASK [Create report output directory] ****************************************** 2026-02-03 04:54:20.257224 | orchestrator | Tuesday 03 February 2026 04:54:05 +0000 (0:00:00.987) 0:00:01.463 ****** 2026-02-03 04:54:20.257235 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-03 04:54:20.257339 | orchestrator | 2026-02-03 04:54:20.257355 | orchestrator | TASK [Define report vars] ****************************************************** 2026-02-03 04:54:20.257366 | orchestrator | Tuesday 03 February 2026 04:54:06 +0000 (0:00:01.101) 0:00:02.565 ****** 2026-02-03 04:54:20.257378 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:54:20.257390 | orchestrator | 2026-02-03 04:54:20.257401 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-02-03 04:54:20.257412 | orchestrator | Tuesday 03 February 2026 04:54:06 +0000 (0:00:00.170) 0:00:02.736 ****** 2026-02-03 04:54:20.257423 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:54:20.257437 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:54:20.257457 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:54:20.257475 | orchestrator | 2026-02-03 04:54:20.257496 | orchestrator | TASK [Get container 
info] ****************************************************** 2026-02-03 04:54:20.257516 | orchestrator | Tuesday 03 February 2026 04:54:06 +0000 (0:00:00.472) 0:00:03.209 ****** 2026-02-03 04:54:20.257568 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:54:20.257590 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:54:20.257611 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:54:20.257624 | orchestrator | 2026-02-03 04:54:20.257637 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-02-03 04:54:20.257649 | orchestrator | Tuesday 03 February 2026 04:54:07 +0000 (0:00:01.095) 0:00:04.304 ****** 2026-02-03 04:54:20.257662 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:54:20.257675 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:54:20.257689 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:54:20.257702 | orchestrator | 2026-02-03 04:54:20.257715 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-02-03 04:54:20.257729 | orchestrator | Tuesday 03 February 2026 04:54:08 +0000 (0:00:00.326) 0:00:04.630 ****** 2026-02-03 04:54:20.257743 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:54:20.257756 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:54:20.257769 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:54:20.257782 | orchestrator | 2026-02-03 04:54:20.257795 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-03 04:54:20.257815 | orchestrator | Tuesday 03 February 2026 04:54:08 +0000 (0:00:00.551) 0:00:05.182 ****** 2026-02-03 04:54:20.257833 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:54:20.257852 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:54:20.257870 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:54:20.257889 | orchestrator | 2026-02-03 04:54:20.257908 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2026-02-03 04:54:20.257928 | orchestrator | Tuesday 03 February 2026 04:54:09 +0000 (0:00:00.345) 0:00:05.527 ****** 2026-02-03 04:54:20.257947 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:54:20.257960 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:54:20.257972 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:54:20.257983 | orchestrator | 2026-02-03 04:54:20.257994 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-02-03 04:54:20.258005 | orchestrator | Tuesday 03 February 2026 04:54:09 +0000 (0:00:00.332) 0:00:05.860 ****** 2026-02-03 04:54:20.258097 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:54:20.258126 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:54:20.258146 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:54:20.258164 | orchestrator | 2026-02-03 04:54:20.258179 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-03 04:54:20.258191 | orchestrator | Tuesday 03 February 2026 04:54:10 +0000 (0:00:00.584) 0:00:06.444 ****** 2026-02-03 04:54:20.258201 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:54:20.258212 | orchestrator | 2026-02-03 04:54:20.258223 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-03 04:54:20.258234 | orchestrator | Tuesday 03 February 2026 04:54:10 +0000 (0:00:00.298) 0:00:06.743 ****** 2026-02-03 04:54:20.258272 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:54:20.258284 | orchestrator | 2026-02-03 04:54:20.258296 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-03 04:54:20.258306 | orchestrator | Tuesday 03 February 2026 04:54:10 +0000 (0:00:00.299) 0:00:07.042 ****** 2026-02-03 04:54:20.258317 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:54:20.258328 | orchestrator | 2026-02-03 04:54:20.258339 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2026-02-03 04:54:20.258350 | orchestrator | Tuesday 03 February 2026 04:54:10 +0000 (0:00:00.266) 0:00:07.309 ****** 2026-02-03 04:54:20.258361 | orchestrator | 2026-02-03 04:54:20.258372 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-03 04:54:20.258383 | orchestrator | Tuesday 03 February 2026 04:54:11 +0000 (0:00:00.085) 0:00:07.394 ****** 2026-02-03 04:54:20.258446 | orchestrator | 2026-02-03 04:54:20.258459 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-03 04:54:20.258472 | orchestrator | Tuesday 03 February 2026 04:54:11 +0000 (0:00:00.079) 0:00:07.474 ****** 2026-02-03 04:54:20.258509 | orchestrator | 2026-02-03 04:54:20.258529 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-03 04:54:20.258550 | orchestrator | Tuesday 03 February 2026 04:54:11 +0000 (0:00:00.080) 0:00:07.555 ****** 2026-02-03 04:54:20.258569 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:54:20.258588 | orchestrator | 2026-02-03 04:54:20.258607 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-02-03 04:54:20.258627 | orchestrator | Tuesday 03 February 2026 04:54:11 +0000 (0:00:00.266) 0:00:07.821 ****** 2026-02-03 04:54:20.258648 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:54:20.258666 | orchestrator | 2026-02-03 04:54:20.258702 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-02-03 04:54:20.258714 | orchestrator | Tuesday 03 February 2026 04:54:11 +0000 (0:00:00.279) 0:00:08.100 ****** 2026-02-03 04:54:20.258725 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:54:20.258736 | orchestrator | 2026-02-03 04:54:20.258747 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 
2026-02-03 04:54:20.258758 | orchestrator | Tuesday 03 February 2026 04:54:11 +0000 (0:00:00.139) 0:00:08.239 ****** 2026-02-03 04:54:20.258769 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:54:20.258780 | orchestrator | 2026-02-03 04:54:20.258791 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-02-03 04:54:20.258802 | orchestrator | Tuesday 03 February 2026 04:54:13 +0000 (0:00:01.985) 0:00:10.225 ****** 2026-02-03 04:54:20.258813 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:54:20.258823 | orchestrator | 2026-02-03 04:54:20.258851 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-02-03 04:54:20.258863 | orchestrator | Tuesday 03 February 2026 04:54:14 +0000 (0:00:00.479) 0:00:10.705 ****** 2026-02-03 04:54:20.258874 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:54:20.258885 | orchestrator | 2026-02-03 04:54:20.258896 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-02-03 04:54:20.258907 | orchestrator | Tuesday 03 February 2026 04:54:14 +0000 (0:00:00.403) 0:00:11.108 ****** 2026-02-03 04:54:20.258918 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:54:20.258929 | orchestrator | 2026-02-03 04:54:20.258939 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-02-03 04:54:20.258950 | orchestrator | Tuesday 03 February 2026 04:54:14 +0000 (0:00:00.204) 0:00:11.312 ****** 2026-02-03 04:54:20.258961 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:54:20.258972 | orchestrator | 2026-02-03 04:54:20.258982 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-02-03 04:54:20.258993 | orchestrator | Tuesday 03 February 2026 04:54:15 +0000 (0:00:00.173) 0:00:11.486 ****** 2026-02-03 04:54:20.259004 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-03 
04:54:20.259015 | orchestrator | 2026-02-03 04:54:20.259026 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-02-03 04:54:20.259037 | orchestrator | Tuesday 03 February 2026 04:54:15 +0000 (0:00:00.285) 0:00:11.772 ****** 2026-02-03 04:54:20.259047 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:54:20.259058 | orchestrator | 2026-02-03 04:54:20.259069 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-03 04:54:20.259080 | orchestrator | Tuesday 03 February 2026 04:54:15 +0000 (0:00:00.287) 0:00:12.059 ****** 2026-02-03 04:54:20.259091 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-03 04:54:20.259102 | orchestrator | 2026-02-03 04:54:20.259113 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-03 04:54:20.259124 | orchestrator | Tuesday 03 February 2026 04:54:17 +0000 (0:00:01.502) 0:00:13.562 ****** 2026-02-03 04:54:20.259135 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-03 04:54:20.259146 | orchestrator | 2026-02-03 04:54:20.259156 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-03 04:54:20.259167 | orchestrator | Tuesday 03 February 2026 04:54:17 +0000 (0:00:00.306) 0:00:13.868 ****** 2026-02-03 04:54:20.259186 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-03 04:54:20.259197 | orchestrator | 2026-02-03 04:54:20.259208 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-03 04:54:20.259219 | orchestrator | Tuesday 03 February 2026 04:54:17 +0000 (0:00:00.285) 0:00:14.154 ****** 2026-02-03 04:54:20.259230 | orchestrator | 2026-02-03 04:54:20.259240 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-03 04:54:20.259281 | orchestrator 
| Tuesday 03 February 2026 04:54:17 +0000 (0:00:00.075) 0:00:14.229 ****** 2026-02-03 04:54:20.259294 | orchestrator | 2026-02-03 04:54:20.259313 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-03 04:54:20.259332 | orchestrator | Tuesday 03 February 2026 04:54:17 +0000 (0:00:00.073) 0:00:14.302 ****** 2026-02-03 04:54:20.259350 | orchestrator | 2026-02-03 04:54:20.259368 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-02-03 04:54:20.259386 | orchestrator | Tuesday 03 February 2026 04:54:18 +0000 (0:00:00.302) 0:00:14.605 ****** 2026-02-03 04:54:20.259404 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-03 04:54:20.259422 | orchestrator | 2026-02-03 04:54:20.259439 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-03 04:54:20.259457 | orchestrator | Tuesday 03 February 2026 04:54:19 +0000 (0:00:01.467) 0:00:16.073 ****** 2026-02-03 04:54:20.259475 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-02-03 04:54:20.259495 | orchestrator |  "msg": [ 2026-02-03 04:54:20.259514 | orchestrator |  "Validator run completed.", 2026-02-03 04:54:20.259541 | orchestrator |  "You can find the report file here:", 2026-02-03 04:54:20.259553 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-02-03T04:54:04+00:00-report.json", 2026-02-03 04:54:20.259565 | orchestrator |  "on the following host:", 2026-02-03 04:54:20.259576 | orchestrator |  "testbed-manager" 2026-02-03 04:54:20.259587 | orchestrator |  ] 2026-02-03 04:54:20.259598 | orchestrator | } 2026-02-03 04:54:20.259610 | orchestrator | 2026-02-03 04:54:20.259620 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 04:54:20.259632 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2026-02-03 04:54:20.259645 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 04:54:20.259667 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 04:54:20.645049 | orchestrator | 2026-02-03 04:54:20.645152 | orchestrator | 2026-02-03 04:54:20.645168 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 04:54:20.645181 | orchestrator | Tuesday 03 February 2026 04:54:20 +0000 (0:00:00.470) 0:00:16.543 ****** 2026-02-03 04:54:20.645192 | orchestrator | =============================================================================== 2026-02-03 04:54:20.645203 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.99s 2026-02-03 04:54:20.645214 | orchestrator | Aggregate test results step one ----------------------------------------- 1.50s 2026-02-03 04:54:20.645224 | orchestrator | Write report file ------------------------------------------------------- 1.47s 2026-02-03 04:54:20.645235 | orchestrator | Create report output directory ------------------------------------------ 1.10s 2026-02-03 04:54:20.645295 | orchestrator | Get container info ------------------------------------------------------ 1.10s 2026-02-03 04:54:20.645307 | orchestrator | Get timestamp for report file ------------------------------------------- 0.99s 2026-02-03 04:54:20.645319 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.58s 2026-02-03 04:54:20.645330 | orchestrator | Set test result to passed if container is existing ---------------------- 0.55s 2026-02-03 04:54:20.645368 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.48s 2026-02-03 04:54:20.645380 | orchestrator | Prepare test data for container existance test -------------------------- 0.47s 2026-02-03 04:54:20.645391 | 
orchestrator | Print report file information ------------------------------------------- 0.47s 2026-02-03 04:54:20.645402 | orchestrator | Flush handlers ---------------------------------------------------------- 0.45s 2026-02-03 04:54:20.645413 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.40s 2026-02-03 04:54:20.645424 | orchestrator | Prepare test data ------------------------------------------------------- 0.35s 2026-02-03 04:54:20.645434 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.33s 2026-02-03 04:54:20.645445 | orchestrator | Set test result to failed if container is missing ----------------------- 0.33s 2026-02-03 04:54:20.645456 | orchestrator | Aggregate test results step two ----------------------------------------- 0.31s 2026-02-03 04:54:20.645467 | orchestrator | Aggregate test results step two ----------------------------------------- 0.30s 2026-02-03 04:54:20.645478 | orchestrator | Aggregate test results step one ----------------------------------------- 0.30s 2026-02-03 04:54:20.645488 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.29s 2026-02-03 04:54:21.001887 | orchestrator | + osism validate ceph-osds 2026-02-03 04:54:43.437249 | orchestrator | 2026-02-03 04:54:43.437426 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-02-03 04:54:43.437460 | orchestrator | 2026-02-03 04:54:43.437481 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-02-03 04:54:43.437493 | orchestrator | Tuesday 03 February 2026 04:54:38 +0000 (0:00:00.452) 0:00:00.452 ****** 2026-02-03 04:54:43.437505 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-03 04:54:43.437517 | orchestrator | 2026-02-03 04:54:43.437528 | orchestrator | TASK [Get extra vars for Ceph configuration] 
*********************************** 2026-02-03 04:54:43.437539 | orchestrator | Tuesday 03 February 2026 04:54:39 +0000 (0:00:00.923) 0:00:01.375 ****** 2026-02-03 04:54:43.437550 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-03 04:54:43.437561 | orchestrator | 2026-02-03 04:54:43.437572 | orchestrator | TASK [Create report output directory] ****************************************** 2026-02-03 04:54:43.437583 | orchestrator | Tuesday 03 February 2026 04:54:39 +0000 (0:00:00.535) 0:00:01.911 ****** 2026-02-03 04:54:43.437594 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-03 04:54:43.437604 | orchestrator | 2026-02-03 04:54:43.437615 | orchestrator | TASK [Define report vars] ****************************************************** 2026-02-03 04:54:43.437626 | orchestrator | Tuesday 03 February 2026 04:54:40 +0000 (0:00:00.827) 0:00:02.738 ****** 2026-02-03 04:54:43.437637 | orchestrator | ok: [testbed-node-3] 2026-02-03 04:54:43.437650 | orchestrator | 2026-02-03 04:54:43.437661 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-02-03 04:54:43.437672 | orchestrator | Tuesday 03 February 2026 04:54:40 +0000 (0:00:00.144) 0:00:02.883 ****** 2026-02-03 04:54:43.437683 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:54:43.437694 | orchestrator | 2026-02-03 04:54:43.437707 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-02-03 04:54:43.437726 | orchestrator | Tuesday 03 February 2026 04:54:41 +0000 (0:00:00.147) 0:00:03.030 ****** 2026-02-03 04:54:43.437744 | orchestrator | skipping: [testbed-node-3] 2026-02-03 04:54:43.437762 | orchestrator | skipping: [testbed-node-4] 2026-02-03 04:54:43.437784 | orchestrator | skipping: [testbed-node-5] 2026-02-03 04:54:43.437803 | orchestrator | 2026-02-03 04:54:43.437847 | orchestrator | TASK [Define OSD test variables] 
*********************************************** 2026-02-03 04:54:43.437863 | orchestrator | Tuesday 03 February 2026 04:54:41 +0000 (0:00:00.333) 0:00:03.363 ****** 2026-02-03 04:54:43.437876 | orchestrator | ok: [testbed-node-3] 2026-02-03 04:54:43.437888 | orchestrator | 2026-02-03 04:54:43.437901 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-02-03 04:54:43.437938 | orchestrator | Tuesday 03 February 2026 04:54:41 +0000 (0:00:00.181) 0:00:03.545 ****** 2026-02-03 04:54:43.437952 | orchestrator | ok: [testbed-node-3] 2026-02-03 04:54:43.437965 | orchestrator | ok: [testbed-node-4] 2026-02-03 04:54:43.437977 | orchestrator | ok: [testbed-node-5] 2026-02-03 04:54:43.437990 | orchestrator | 2026-02-03 04:54:43.438004 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-02-03 04:54:43.438075 | orchestrator | Tuesday 03 February 2026 04:54:41 +0000 (0:00:00.360) 0:00:03.905 ****** 2026-02-03 04:54:43.438099 | orchestrator | ok: [testbed-node-3] 2026-02-03 04:54:43.438118 | orchestrator | 2026-02-03 04:54:43.438137 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-03 04:54:43.438156 | orchestrator | Tuesday 03 February 2026 04:54:42 +0000 (0:00:00.840) 0:00:04.745 ****** 2026-02-03 04:54:43.438175 | orchestrator | ok: [testbed-node-3] 2026-02-03 04:54:43.438194 | orchestrator | ok: [testbed-node-4] 2026-02-03 04:54:43.438214 | orchestrator | ok: [testbed-node-5] 2026-02-03 04:54:43.438233 | orchestrator | 2026-02-03 04:54:43.438250 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-02-03 04:54:43.438262 | orchestrator | Tuesday 03 February 2026 04:54:43 +0000 (0:00:00.318) 0:00:05.064 ****** 2026-02-03 04:54:43.438303 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6a766f3c0af58db2a60b0f53e16ea37b6e16c0be73b643a27eea055cabdaf503', 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-02-03 04:54:43.438319 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2cdeeef76c4114a5190bc990d6f293c9f55c156fadfb9c0f6a54b38bb2811819', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-02-03 04:54:43.438332 | orchestrator | skipping: [testbed-node-3] => (item={'id': '04094f24d810234c0b5e35f674ad4fb3ed4035c8b48c8f548498a744e3f48e54', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-02-03 04:54:43.438343 | orchestrator | skipping: [testbed-node-3] => (item={'id': '93f27bf2098746b91e47e1d074f44a7778509e98582159fc274bacdae0b8db08', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})
2026-02-03 04:54:43.438354 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b0937904b2ecafac9ad54d36796e34ee9dba6b53ca577b65d833d8497e490b6a', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-02-03 04:54:43.438393 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4122dabe303927fc1f70b76c200f55b6f1111bb5e4173744a439f718f7583c10', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-02-03 04:54:43.438406 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b6281e14bbc6def206bf1a793b1cd6073eb08fc0e2919b1adf277ee83cb4f406', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-02-03 04:54:43.438417 | orchestrator | skipping: [testbed-node-3] => (item={'id': '17365653fe20996d650dc625dc80cea23a8f371281e3aa4e868c29a1f593bc49', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})
2026-02-03 04:54:43.438428 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b5113ffe58ba5473de79d66ef7bc9ab9572793e07fcf6a2ce5f08d3bdd11f491', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-02-03 04:54:43.438452 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b20559963814eb2217a690ae9653eea2243e638ff2f75d263a2b0ba247e571ce', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-02-03 04:54:43.438469 | orchestrator | skipping: [testbed-node-3] => (item={'id': '61d248648a05aea6d496e46c21eac67e65b36692463d3793eb7ca11cf77684df', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-02-03 04:54:43.438489 | orchestrator | ok: [testbed-node-3] => (item={'id': '82d559d08288f51caec78ed9478ee790fa00be175e53fc8b9eedfc0c06fcadc3', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'})
2026-02-03 04:54:43.438508 | orchestrator | ok: [testbed-node-3] => (item={'id': '22e88878c0680d2df60b961d95eb30926589bdf9b53a79b15c8bc698f4f81708', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'})
2026-02-03 04:54:43.438527 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7b2a6dfe55d45a3f12453c6e1fe1666173bf9c325761d45671014378caf4ce30', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-02-03 04:54:43.438546 | orchestrator | skipping: [testbed-node-3] => (item={'id': '674c3f858508c489c9fd3d21f0201d79500752716647709bcce9c3693f723e47', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-03 04:54:43.438565 | orchestrator | skipping: [testbed-node-3] => (item={'id': '35712c4c2d6dd63d5f03d31408fe785bf1d235302ccd544db0080b91f0991d27', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-03 04:54:43.438586 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1d04093d77ebb369e6a93f9dc13fd7120b9ab4213a80e1be0d3186f7b473c7af', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-03 04:54:43.438605 | orchestrator | skipping: [testbed-node-3] => (item={'id': '812173ff5fda0f395976b03db9738b33ff6427374317e2b716998f88c4260b70', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-03 04:54:43.438617 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0fcad031fecd298716ea27943234628d2f6615d471f1ad46506bf87932bd7d8e', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-03 04:54:43.438629 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4f5405f42a471d46acb17579ab3af1344e396def4962105d86b79d99113dd7b6', 'image':
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-02-03 04:54:43.438650 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'dcaa371448b110adb6c7c79fbeb31a83d0bb7af8e89ebf39dfa1c997f781f8c1', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-02-03 04:54:43.701937 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e17625170f8feeb798efaaccd7ad4ef3800ed7dd7dd4ed4f6a64b6820bb1ca1e', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-02-03 04:54:43.702124 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0d16bd165f4cc52b90884a13e4c7dc53c6f0b524bf8e527a1e06fb478c1122a4', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})
2026-02-03 04:54:43.702161 | orchestrator | skipping: [testbed-node-4] => (item={'id': '301ef8ef1a4b07f3596a43414588a13961f57af1dcca3f74d6c50d2a914ab0f6', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-02-03 04:54:43.702173 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bb9cb31900932ead12d8f9e10ad6e6fd340f23424ebb58ffeb2fc6d25fc71c1e', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-02-03 04:54:43.702187 | orchestrator | skipping: [testbed-node-4] => (item={'id': '43eab9ba69627111a2fbfdc303bd61db7fced08757eeadd61a049a79cfaca984', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-02-03 04:54:43.702197 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0142d35c30b1fc40df0afb1a4a087334383f1e31c41ed57269333f86b3b8bd3e', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})
2026-02-03 04:54:43.702206 | orchestrator | skipping: [testbed-node-4] => (item={'id': '69e575babfe82375a4e9a7977b468e8e84df5172452f17da8361f44385aa56a0', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-02-03 04:54:43.702216 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2426c154589c155122402864389e97a2bd90275316825dd9d028b263aa3f152d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})
2026-02-03 04:54:43.702226 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a26edd49d031147f2b8430cc524ac0412476d49950a9cd41b3de8c08b3eac517', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})
2026-02-03 04:54:43.702237 | orchestrator | ok: [testbed-node-4] => (item={'id': 'a4d384cbabc80b4b00bcec33027c5a6c2a3171bf19f9067a7d1e9c2a848473b7', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'})
2026-02-03 04:54:43.702246 | orchestrator | ok: [testbed-node-4] => (item={'id': '16deda325b414af8211b571a43e71727693706b1faf75804fcaea25390dcbecb', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'})
2026-02-03 04:54:43.702255 | orchestrator | skipping: [testbed-node-4] => (item={'id': '362e106ce7a52216fc9939365ed3194c5203c9903b0c1efbbbebf496f58143c7', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-02-03 04:54:43.702264 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4ab75191f66cdc5fcbb1f1f0bd4721181a891a23ef2fc2aea2288cdf4d64417c', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-03 04:54:43.702334 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6fb1746563d14d155eae628f5e32036fcbecc6aac69ec6f322eb1b909221e771', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-03 04:54:43.702360 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bb5956b89b49f3af1636e067da224cee56632c64c17deb6f3fd31e5c01fd8c14', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-03 04:54:43.702377 | orchestrator | skipping: [testbed-node-4] => (item={'id': '61dcf239a5d1ca5f4a6e184027839cf8e17df1b8727bc104e49a71f085823cc0', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-03 04:54:43.702386 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3110eead9cb5966552c44cd51647757e4648208d34ee7b5a9f02df5e02f893e8', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-03 04:54:43.702395 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e30a06f0179c3fd73b96eac3220c1f87e62e3ee9065035c78fea079a5ef56839', 'image':
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-02-03 04:54:43.702404 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'bad3421de06b988192481b125142f36d34617a5722c453dddf22b42c4fe2f241', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-02-03 04:54:43.702417 | orchestrator | skipping: [testbed-node-5] => (item={'id': '74531cbedd4bf47deaf21fa39e14a87bc8216fcad5d5e5973e5ae6578653f5f4', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-02-03 04:54:43.702426 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4877e3ea31dbfb1fef200f549cedebc67c83fff1557c5d99f139a8b373f15314', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})
2026-02-03 04:54:43.702435 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'bb9bd18a2a10dcb5028ece3c93f04f1620619499aefb0bd357982c73f9b8addb', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-02-03 04:54:43.702444 | orchestrator | skipping: [testbed-node-5] => (item={'id': '20778b9c1483a165e46903dc958e636a9a52b49d4b58809e9f61fae850627f6e', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-02-03 04:54:43.702453 | orchestrator | skipping: [testbed-node-5] => (item={'id': '94afa9aa41b73f1964fdb3d9538c9e8aea87ea9cf71b677d5bd2f9b905582651', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})
2026-02-03 04:54:43.702462 | orchestrator | skipping: [testbed-node-5] => (item={'id': '57906678e15d51cb8ce19e92da56c0a094029afbc629c2c680a1fbb612b7b6ed', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})
2026-02-03 04:54:43.702471 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2892f961d1bb850998337d4817855e8cefc6711ceb4fa46fd42a7b5f3a4054d5', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-02-03 04:54:43.702480 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a7561e3ce96123f7458e26caef0d1062f2c070f0edff2105a8f900ef6dea8234', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})
2026-02-03 04:54:43.702492 | orchestrator | skipping: [testbed-node-5] => (item={'id': '74193cfa68aab96a939617016cb6b95b6da9949841a21817b0ac85c37ce67ebe', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})
2026-02-03 04:54:43.702509 | orchestrator | ok: [testbed-node-5] => (item={'id': 'bcea4ff91c7323f64ae5ef79c0b7c997679a7459940e4ddd496adda35468560e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'})
2026-02-03 04:54:43.702526 | orchestrator | ok: [testbed-node-5] => (item={'id': '054a495b98868e2001f43c308dca6850adf63d140129045c3d8e80b3cc4f36f4', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'})
2026-02-03 04:54:55.856950 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'dbf1d02f70574aa1da2863000509685a106e7d22b51cb1bb7dd3c5a7faa2823a', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-02-03 04:54:55.857067 | orchestrator | skipping: [testbed-node-5] => (item={'id': '45e098f11beb56ca481ee09f23850935cf852a9e97fe52d74e4388452444a272', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-03 04:54:55.857085 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6bf2808b21347ad345846033ecbe5aeb7c00081e62dd253c43d58b4f4c6c8ee2', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-03 04:54:55.857099 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ba69ce5877fc3ee5f5467435f3af43787309ff93eef89615274b4c2e31b77e08', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-03 04:54:55.857128 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ada65dc0bddf7ae6dab57d935146ba70df20f2857ac336bc8741dc9ceb48d63d', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-03 04:54:55.857141 | orchestrator | skipping: [testbed-node-5] => (item={'id': '460fe4a5c179bd535bbdaee5019afc0a9735c0fe2dba7ccb7b1f3c699ede6ad9', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-03 04:54:55.857153 | orchestrator |
2026-02-03 04:54:55.857166 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2026-02-03 04:54:55.857178 | orchestrator | Tuesday 03 February 2026
04:54:43 +0000 (0:00:00.561) 0:00:05.626 ******
2026-02-03 04:54:55.857189 | orchestrator | ok: [testbed-node-3]
2026-02-03 04:54:55.857202 | orchestrator | ok: [testbed-node-4]
2026-02-03 04:54:55.857212 | orchestrator | ok: [testbed-node-5]
2026-02-03 04:54:55.857223 | orchestrator |
2026-02-03 04:54:55.857234 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2026-02-03 04:54:55.857246 | orchestrator | Tuesday 03 February 2026 04:54:44 +0000 (0:00:00.361) 0:00:05.988 ******
2026-02-03 04:54:55.857257 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:54:55.857268 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:54:55.857374 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:54:55.857391 | orchestrator |
2026-02-03 04:54:55.857402 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2026-02-03 04:54:55.857413 | orchestrator | Tuesday 03 February 2026 04:54:44 +0000 (0:00:00.558) 0:00:06.547 ******
2026-02-03 04:54:55.857424 | orchestrator | ok: [testbed-node-3]
2026-02-03 04:54:55.857436 | orchestrator | ok: [testbed-node-4]
2026-02-03 04:54:55.857447 | orchestrator | ok: [testbed-node-5]
2026-02-03 04:54:55.857458 | orchestrator |
2026-02-03 04:54:55.857471 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-03 04:54:55.857484 | orchestrator | Tuesday 03 February 2026 04:54:44 +0000 (0:00:00.303) 0:00:06.851 ******
2026-02-03 04:54:55.857498 | orchestrator | ok: [testbed-node-3]
2026-02-03 04:54:55.857510 | orchestrator | ok: [testbed-node-4]
2026-02-03 04:54:55.857522 | orchestrator | ok: [testbed-node-5]
2026-02-03 04:54:55.857557 | orchestrator |
2026-02-03 04:54:55.857570 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2026-02-03 04:54:55.857583 | orchestrator | Tuesday 03 February 2026 04:54:45 +0000 (0:00:00.337) 0:00:07.202 ******
2026-02-03 04:54:55.857595 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2026-02-03 04:54:55.857609 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2026-02-03 04:54:55.857621 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:54:55.857634 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2026-02-03 04:54:55.857646 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2026-02-03 04:54:55.857658 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:54:55.857671 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2026-02-03 04:54:55.857684 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2026-02-03 04:54:55.857696 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:54:55.857709 | orchestrator |
2026-02-03 04:54:55.857722 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2026-02-03 04:54:55.857735 | orchestrator | Tuesday 03 February 2026 04:54:45 +0000 (0:00:00.337) 0:00:07.539 ******
2026-02-03 04:54:55.857747 | orchestrator | ok: [testbed-node-3]
2026-02-03 04:54:55.857760 | orchestrator | ok: [testbed-node-4]
2026-02-03 04:54:55.857772 | orchestrator | ok: [testbed-node-5]
2026-02-03 04:54:55.857783 | orchestrator |
2026-02-03 04:54:55.857796 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-02-03 04:54:55.857809 | orchestrator | Tuesday 03 February 2026 04:54:46 +0000 (0:00:00.554) 0:00:08.094 ******
2026-02-03 04:54:55.857821 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:54:55.857852 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:54:55.857864 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:54:55.857875 | orchestrator |
2026-02-03 04:54:55.857886 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-02-03 04:54:55.857897 | orchestrator | Tuesday 03 February 2026 04:54:46 +0000 (0:00:00.324) 0:00:08.418 ******
2026-02-03 04:54:55.857908 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:54:55.857920 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:54:55.857931 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:54:55.857941 | orchestrator |
2026-02-03 04:54:55.857952 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2026-02-03 04:54:55.857963 | orchestrator | Tuesday 03 February 2026 04:54:46 +0000 (0:00:00.327) 0:00:08.745 ******
2026-02-03 04:54:55.857974 | orchestrator | ok: [testbed-node-3]
2026-02-03 04:54:55.857985 | orchestrator | ok: [testbed-node-4]
2026-02-03 04:54:55.857996 | orchestrator | ok: [testbed-node-5]
2026-02-03 04:54:55.858007 | orchestrator |
2026-02-03 04:54:55.858080 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-03 04:54:55.858093 | orchestrator | Tuesday 03 February 2026 04:54:47 +0000 (0:00:00.377) 0:00:09.122 ******
2026-02-03 04:54:55.858104 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:54:55.858115 | orchestrator |
2026-02-03 04:54:55.858126 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-03 04:54:55.858137 | orchestrator | Tuesday 03 February 2026 04:54:47 +0000 (0:00:00.734) 0:00:09.857 ******
2026-02-03 04:54:55.858148 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:54:55.858159 | orchestrator |
2026-02-03 04:54:55.858170 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-03 04:54:55.858181 | orchestrator | Tuesday 03 February 2026 04:54:48 +0000
(0:00:00.260) 0:00:10.117 ******
2026-02-03 04:54:55.858192 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:54:55.858203 | orchestrator |
2026-02-03 04:54:55.858214 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-03 04:54:55.858234 | orchestrator | Tuesday 03 February 2026 04:54:48 +0000 (0:00:00.276) 0:00:10.393 ******
2026-02-03 04:54:55.858245 | orchestrator |
2026-02-03 04:54:55.858256 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-03 04:54:55.858267 | orchestrator | Tuesday 03 February 2026 04:54:48 +0000 (0:00:00.093) 0:00:10.487 ******
2026-02-03 04:54:55.858278 | orchestrator |
2026-02-03 04:54:55.858310 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-03 04:54:55.858321 | orchestrator | Tuesday 03 February 2026 04:54:48 +0000 (0:00:00.088) 0:00:10.576 ******
2026-02-03 04:54:55.858332 | orchestrator |
2026-02-03 04:54:55.858343 | orchestrator | TASK [Print report file information] *******************************************
2026-02-03 04:54:55.858354 | orchestrator | Tuesday 03 February 2026 04:54:48 +0000 (0:00:00.074) 0:00:10.651 ******
2026-02-03 04:54:55.858365 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:54:55.858377 | orchestrator |
2026-02-03 04:54:55.858388 | orchestrator | TASK [Fail early due to containers not running] ********************************
2026-02-03 04:54:55.858399 | orchestrator | Tuesday 03 February 2026 04:54:48 +0000 (0:00:00.273) 0:00:10.925 ******
2026-02-03 04:54:55.858410 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:54:55.858421 | orchestrator |
2026-02-03 04:54:55.858432 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-03 04:54:55.858442 | orchestrator | Tuesday 03 February 2026 04:54:49 +0000 (0:00:00.312) 0:00:11.237 ******
2026-02-03 04:54:55.858453 | orchestrator | ok: [testbed-node-3]
2026-02-03 04:54:55.858465 | orchestrator | ok: [testbed-node-4]
2026-02-03 04:54:55.858476 | orchestrator | ok: [testbed-node-5]
2026-02-03 04:54:55.858487 | orchestrator |
2026-02-03 04:54:55.858498 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2026-02-03 04:54:55.858509 | orchestrator | Tuesday 03 February 2026 04:54:49 +0000 (0:00:00.324) 0:00:11.562 ******
2026-02-03 04:54:55.858520 | orchestrator | ok: [testbed-node-3]
2026-02-03 04:54:55.858531 | orchestrator |
2026-02-03 04:54:55.858541 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2026-02-03 04:54:55.858552 | orchestrator | Tuesday 03 February 2026 04:54:50 +0000 (0:00:00.758) 0:00:12.321 ******
2026-02-03 04:54:55.858563 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-03 04:54:55.858575 | orchestrator |
2026-02-03 04:54:55.858586 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2026-02-03 04:54:55.858597 | orchestrator | Tuesday 03 February 2026 04:54:52 +0000 (0:00:01.660) 0:00:13.982 ******
2026-02-03 04:54:55.858608 | orchestrator | ok: [testbed-node-3]
2026-02-03 04:54:55.858619 | orchestrator |
2026-02-03 04:54:55.858630 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2026-02-03 04:54:55.858641 | orchestrator | Tuesday 03 February 2026 04:54:52 +0000 (0:00:00.135) 0:00:14.118 ******
2026-02-03 04:54:55.858652 | orchestrator | ok: [testbed-node-3]
2026-02-03 04:54:55.858663 | orchestrator |
2026-02-03 04:54:55.858674 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2026-02-03 04:54:55.858685 | orchestrator | Tuesday 03 February 2026 04:54:52 +0000 (0:00:00.337) 0:00:14.455 ******
2026-02-03 04:54:55.858696 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:54:55.858707 | orchestrator |
2026-02-03 04:54:55.858718 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2026-02-03 04:54:55.858729 | orchestrator | Tuesday 03 February 2026 04:54:52 +0000 (0:00:00.136) 0:00:14.592 ******
2026-02-03 04:54:55.858740 | orchestrator | ok: [testbed-node-3]
2026-02-03 04:54:55.858751 | orchestrator |
2026-02-03 04:54:55.858762 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-03 04:54:55.858773 | orchestrator | Tuesday 03 February 2026 04:54:52 +0000 (0:00:00.141) 0:00:14.733 ******
2026-02-03 04:54:55.858784 | orchestrator | ok: [testbed-node-3]
2026-02-03 04:54:55.858795 | orchestrator | ok: [testbed-node-4]
2026-02-03 04:54:55.858806 | orchestrator | ok: [testbed-node-5]
2026-02-03 04:54:55.858826 | orchestrator |
2026-02-03 04:54:55.858837 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2026-02-03 04:54:55.858848 | orchestrator | Tuesday 03 February 2026 04:54:53 +0000 (0:00:00.335) 0:00:15.069 ******
2026-02-03 04:54:55.858859 | orchestrator | changed: [testbed-node-3]
2026-02-03 04:54:55.858870 | orchestrator | changed: [testbed-node-4]
2026-02-03 04:54:55.858882 | orchestrator | changed: [testbed-node-5]
2026-02-03 04:55:07.169727 | orchestrator |
2026-02-03 04:55:07.169843 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2026-02-03 04:55:07.169863 | orchestrator | Tuesday 03 February 2026 04:54:55 +0000 (0:00:02.714) 0:00:17.784 ******
2026-02-03 04:55:07.169876 | orchestrator | ok: [testbed-node-3]
2026-02-03 04:55:07.169888 | orchestrator | ok: [testbed-node-4]
2026-02-03 04:55:07.169899 | orchestrator | ok: [testbed-node-5]
2026-02-03 04:55:07.169910 | orchestrator |
2026-02-03 04:55:07.169922 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2026-02-03 04:55:07.169933 | orchestrator |
Tuesday 03 February 2026 04:54:56 +0000 (0:00:00.339) 0:00:18.123 ******
2026-02-03 04:55:07.169945 | orchestrator | ok: [testbed-node-3]
2026-02-03 04:55:07.169956 | orchestrator | ok: [testbed-node-4]
2026-02-03 04:55:07.169967 | orchestrator | ok: [testbed-node-5]
2026-02-03 04:55:07.169979 | orchestrator |
2026-02-03 04:55:07.169990 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2026-02-03 04:55:07.170001 | orchestrator | Tuesday 03 February 2026 04:54:56 +0000 (0:00:00.537) 0:00:18.661 ******
2026-02-03 04:55:07.170013 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:55:07.170081 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:55:07.170093 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:55:07.170104 | orchestrator |
2026-02-03 04:55:07.170115 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2026-02-03 04:55:07.170127 | orchestrator | Tuesday 03 February 2026 04:54:57 +0000 (0:00:00.370) 0:00:19.031 ******
2026-02-03 04:55:07.170144 | orchestrator | ok: [testbed-node-3]
2026-02-03 04:55:07.170164 | orchestrator | ok: [testbed-node-4]
2026-02-03 04:55:07.170185 | orchestrator | ok: [testbed-node-5]
2026-02-03 04:55:07.170203 | orchestrator |
2026-02-03 04:55:07.170214 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2026-02-03 04:55:07.170230 | orchestrator | Tuesday 03 February 2026 04:54:57 +0000 (0:00:00.605) 0:00:19.637 ******
2026-02-03 04:55:07.170242 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:55:07.170253 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:55:07.170264 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:55:07.170275 | orchestrator |
2026-02-03 04:55:07.170290 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2026-02-03 04:55:07.170329 | orchestrator | Tuesday 03 February 2026 04:54:58 +0000 (0:00:00.334) 0:00:19.972 ******
2026-02-03 04:55:07.170343 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:55:07.170357 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:55:07.170370 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:55:07.170383 | orchestrator |
2026-02-03 04:55:07.170396 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-03 04:55:07.170410 | orchestrator | Tuesday 03 February 2026 04:54:58 +0000 (0:00:00.343) 0:00:20.316 ******
2026-02-03 04:55:07.170422 | orchestrator | ok: [testbed-node-3]
2026-02-03 04:55:07.170436 | orchestrator | ok: [testbed-node-4]
2026-02-03 04:55:07.170449 | orchestrator | ok: [testbed-node-5]
2026-02-03 04:55:07.170462 | orchestrator |
2026-02-03 04:55:07.170476 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2026-02-03 04:55:07.170491 | orchestrator | Tuesday 03 February 2026 04:54:58 +0000 (0:00:00.509) 0:00:20.825 ******
2026-02-03 04:55:07.170511 | orchestrator | ok: [testbed-node-3]
2026-02-03 04:55:07.170530 | orchestrator | ok: [testbed-node-4]
2026-02-03 04:55:07.170550 | orchestrator | ok: [testbed-node-5]
2026-02-03 04:55:07.170567 | orchestrator |
2026-02-03 04:55:07.170587 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2026-02-03 04:55:07.170631 | orchestrator | Tuesday 03 February 2026 04:54:59 +0000 (0:00:00.985) 0:00:21.810 ******
2026-02-03 04:55:07.170651 | orchestrator | ok: [testbed-node-3]
2026-02-03 04:55:07.170671 | orchestrator | ok: [testbed-node-4]
2026-02-03 04:55:07.170689 | orchestrator | ok: [testbed-node-5]
2026-02-03 04:55:07.170709 | orchestrator |
2026-02-03 04:55:07.170729 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2026-02-03 04:55:07.170748 | orchestrator | Tuesday 03 February 2026 04:55:00 +0000 (0:00:00.361) 0:00:22.172 ******
2026-02-03 04:55:07.170761 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:55:07.170772 | orchestrator | skipping: [testbed-node-4]
2026-02-03 04:55:07.170783 | orchestrator | skipping: [testbed-node-5]
2026-02-03 04:55:07.170794 | orchestrator |
2026-02-03 04:55:07.170805 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2026-02-03 04:55:07.170816 | orchestrator | Tuesday 03 February 2026 04:55:00 +0000 (0:00:00.352) 0:00:22.525 ******
2026-02-03 04:55:07.170827 | orchestrator | ok: [testbed-node-3]
2026-02-03 04:55:07.170837 | orchestrator | ok: [testbed-node-4]
2026-02-03 04:55:07.170848 | orchestrator | ok: [testbed-node-5]
2026-02-03 04:55:07.170859 | orchestrator |
2026-02-03 04:55:07.170869 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-02-03 04:55:07.170880 | orchestrator | Tuesday 03 February 2026 04:55:01 +0000 (0:00:00.577) 0:00:23.103 ******
2026-02-03 04:55:07.170891 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-03 04:55:07.170902 | orchestrator |
2026-02-03 04:55:07.170913 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-02-03 04:55:07.170924 | orchestrator | Tuesday 03 February 2026 04:55:01 +0000 (0:00:00.361) 0:00:23.464 ******
2026-02-03 04:55:07.170935 | orchestrator | skipping: [testbed-node-3]
2026-02-03 04:55:07.170945 | orchestrator |
2026-02-03 04:55:07.170956 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-03 04:55:07.170967 | orchestrator | Tuesday 03 February 2026 04:55:01 +0000 (0:00:00.290) 0:00:23.754 ******
2026-02-03 04:55:07.170978 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-03 04:55:07.170988 | orchestrator |
2026-02-03 04:55:07.170999 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-03
04:55:07.171010 | orchestrator | Tuesday 03 February 2026 04:55:03 +0000 (0:00:01.841) 0:00:25.596 ******
2026-02-03 04:55:07.171021 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-03 04:55:07.171032 | orchestrator |
2026-02-03 04:55:07.171043 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-03 04:55:07.171053 | orchestrator | Tuesday 03 February 2026 04:55:03 +0000 (0:00:00.280) 0:00:25.876 ******
2026-02-03 04:55:07.171064 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-03 04:55:07.171075 | orchestrator |
2026-02-03 04:55:07.171105 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-03 04:55:07.171116 | orchestrator | Tuesday 03 February 2026 04:55:04 +0000 (0:00:00.087) 0:00:26.154 ******
2026-02-03 04:55:07.171127 | orchestrator |
2026-02-03 04:55:07.171138 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-03 04:55:07.171149 | orchestrator | Tuesday 03 February 2026 04:55:04 +0000 (0:00:00.087) 0:00:26.241 ******
2026-02-03 04:55:07.171160 | orchestrator |
2026-02-03 04:55:07.171170 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-03 04:55:07.171181 | orchestrator | Tuesday 03 February 2026 04:55:04 +0000 (0:00:00.087) 0:00:26.329 ******
2026-02-03 04:55:07.171192 | orchestrator |
2026-02-03 04:55:07.171203 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-02-03 04:55:07.171215 | orchestrator | Tuesday 03 February 2026 04:55:04 +0000 (0:00:00.081) 0:00:26.410 ******
2026-02-03 04:55:07.171234 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-03 04:55:07.171252 | orchestrator |
2026-02-03 04:55:07.171270 | orchestrator | TASK [Print report file information] *******************************************
2026-02-03 04:55:07.171358 | orchestrator | Tuesday 03 February 2026 04:55:06 +0000 (0:00:01.670) 0:00:28.081 ******
2026-02-03 04:55:07.171378 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2026-02-03 04:55:07.171395 | orchestrator |  "msg": [
2026-02-03 04:55:07.171413 | orchestrator |  "Validator run completed.",
2026-02-03 04:55:07.171432 | orchestrator |  "You can find the report file here:",
2026-02-03 04:55:07.171451 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-02-03T04:54:39+00:00-report.json",
2026-02-03 04:55:07.171480 | orchestrator |  "on the following host:",
2026-02-03 04:55:07.171499 | orchestrator |  "testbed-manager"
2026-02-03 04:55:07.171517 | orchestrator |  ]
2026-02-03 04:55:07.171536 | orchestrator | }
2026-02-03 04:55:07.171557 | orchestrator |
2026-02-03 04:55:07.171576 | orchestrator | PLAY RECAP *********************************************************************
2026-02-03 04:55:07.171595 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-03 04:55:07.171613 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-03 04:55:07.171625 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-03 04:55:07.171636 | orchestrator |
2026-02-03 04:55:07.171647 | orchestrator |
2026-02-03 04:55:07.171658 | orchestrator | TASKS RECAP ********************************************************************
2026-02-03 04:55:07.171669 | orchestrator | Tuesday 03 February 2026 04:55:06 +0000 (0:00:00.624) 0:00:28.706 ******
2026-02-03 04:55:07.171680 | orchestrator | ===============================================================================
2026-02-03 04:55:07.171691 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.72s
2026-02-03 04:55:07.171701 | orchestrator | Aggregate test results step one ----------------------------------------- 1.84s
2026-02-03 04:55:07.171712 | orchestrator | Write report file ------------------------------------------------------- 1.67s
2026-02-03 04:55:07.171723 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.66s
2026-02-03 04:55:07.171734 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.99s
2026-02-03 04:55:07.171745 | orchestrator | Get timestamp for report file ------------------------------------------- 0.92s
2026-02-03 04:55:07.171755 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.84s
2026-02-03 04:55:07.171773 | orchestrator | Create report output directory ------------------------------------------ 0.83s
2026-02-03 04:55:07.171791 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.76s
2026-02-03 04:55:07.171810 | orchestrator | Aggregate test results step one ----------------------------------------- 0.73s
2026-02-03 04:55:07.171829 | orchestrator | Print report file information ------------------------------------------- 0.62s
2026-02-03 04:55:07.171847 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.61s
2026-02-03 04:55:07.171866 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.58s
2026-02-03 04:55:07.171885 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.56s
2026-02-03 04:55:07.171903 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.56s
2026-02-03 04:55:07.171923 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.56s
2026-02-03 04:55:07.171943 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.54s
2026-02-03 04:55:07.171961
| orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.54s 2026-02-03 04:55:07.171975 | orchestrator | Prepare test data ------------------------------------------------------- 0.51s 2026-02-03 04:55:07.171986 | orchestrator | Set test result to passed if all containers are running ----------------- 0.38s 2026-02-03 04:55:07.551696 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-02-03 04:55:07.558807 | orchestrator | + set -e 2026-02-03 04:55:07.558899 | orchestrator | + source /opt/manager-vars.sh 2026-02-03 04:55:07.559532 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-03 04:55:07.559564 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-03 04:55:07.559577 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-03 04:55:07.560759 | orchestrator | ++ CEPH_VERSION=reef 2026-02-03 04:55:07.560798 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-03 04:55:07.560818 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-03 04:55:07.560838 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-03 04:55:07.560857 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-03 04:55:07.560874 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-03 04:55:07.560892 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-03 04:55:07.560909 | orchestrator | ++ export ARA=false 2026-02-03 04:55:07.560927 | orchestrator | ++ ARA=false 2026-02-03 04:55:07.560945 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-03 04:55:07.560963 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-03 04:55:07.560982 | orchestrator | ++ export TEMPEST=false 2026-02-03 04:55:07.561001 | orchestrator | ++ TEMPEST=false 2026-02-03 04:55:07.561020 | orchestrator | ++ export IS_ZUUL=true 2026-02-03 04:55:07.561039 | orchestrator | ++ IS_ZUUL=true 2026-02-03 04:55:07.561058 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-02-03 04:55:07.561079 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-02-03 04:55:07.561091 | orchestrator | ++ export EXTERNAL_API=false 2026-02-03 04:55:07.561102 | orchestrator | ++ EXTERNAL_API=false 2026-02-03 04:55:07.561112 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-03 04:55:07.561124 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-03 04:55:07.561135 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-03 04:55:07.561145 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-03 04:55:07.561156 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-03 04:55:07.561167 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-03 04:55:07.561178 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-02-03 04:55:07.561189 | orchestrator | + source /etc/os-release 2026-02-03 04:55:07.561200 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS' 2026-02-03 04:55:07.561211 | orchestrator | ++ NAME=Ubuntu 2026-02-03 04:55:07.561222 | orchestrator | ++ VERSION_ID=24.04 2026-02-03 04:55:07.561233 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)' 2026-02-03 04:55:07.561244 | orchestrator | ++ VERSION_CODENAME=noble 2026-02-03 04:55:07.561255 | orchestrator | ++ ID=ubuntu 2026-02-03 04:55:07.561266 | orchestrator | ++ ID_LIKE=debian 2026-02-03 04:55:07.561277 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-02-03 04:55:07.561288 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-02-03 04:55:07.561366 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-02-03 04:55:07.561381 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-02-03 04:55:07.561395 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-02-03 04:55:07.561408 | orchestrator | ++ LOGO=ubuntu-logo 2026-02-03 04:55:07.561421 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-02-03 04:55:07.561434 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-02-03 
04:55:07.561448 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-02-03 04:55:07.586078 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-02-03 04:55:32.633548 | orchestrator | 2026-02-03 04:55:32.633678 | orchestrator | # Status of Elasticsearch 2026-02-03 04:55:32.633703 | orchestrator | 2026-02-03 04:55:32.633721 | orchestrator | + pushd /opt/configuration/contrib 2026-02-03 04:55:32.633739 | orchestrator | + echo 2026-02-03 04:55:32.633755 | orchestrator | + echo '# Status of Elasticsearch' 2026-02-03 04:55:32.633770 | orchestrator | + echo 2026-02-03 04:55:32.633785 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-02-03 04:55:32.846379 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-02-03 04:55:32.846502 | orchestrator | 2026-02-03 04:55:32.846527 | orchestrator | # Status of MariaDB 2026-02-03 04:55:32.846542 | orchestrator | 2026-02-03 04:55:32.846557 | orchestrator | + echo 2026-02-03 04:55:32.846610 | orchestrator | + echo '# Status of MariaDB' 2026-02-03 04:55:32.846629 | orchestrator | + echo 2026-02-03 04:55:32.847306 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-03 04:55:32.915757 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-03 04:55:32.915834 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-03 04:55:32.915842 | orchestrator | + MARIADB_USER=root_shard_0 2026-02-03 04:55:32.915850 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-02-03 04:55:33.011141 
| orchestrator | Reading package lists... 2026-02-03 04:55:33.445549 | orchestrator | Building dependency tree... 2026-02-03 04:55:33.445674 | orchestrator | Reading state information... 2026-02-03 04:55:34.003024 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2026-02-03 04:55:34.003147 | orchestrator | bc set to manually installed. 2026-02-03 04:55:34.003162 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2026-02-03 04:55:34.701513 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2026-02-03 04:55:34.701605 | orchestrator | 2026-02-03 04:55:34.701620 | orchestrator | # Status of Prometheus 2026-02-03 04:55:34.701632 | orchestrator | 2026-02-03 04:55:34.701643 | orchestrator | + echo 2026-02-03 04:55:34.701655 | orchestrator | + echo '# Status of Prometheus' 2026-02-03 04:55:34.701666 | orchestrator | + echo 2026-02-03 04:55:34.701678 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-02-03 04:55:34.757535 | orchestrator | Unauthorized 2026-02-03 04:55:34.761662 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-02-03 04:55:34.816413 | orchestrator | Unauthorized 2026-02-03 04:55:34.825232 | orchestrator | 2026-02-03 04:55:34.825356 | orchestrator | # Status of RabbitMQ 2026-02-03 04:55:34.825383 | orchestrator | 2026-02-03 04:55:34.825404 | orchestrator | + echo 2026-02-03 04:55:34.825423 | orchestrator | + echo '# Status of RabbitMQ' 2026-02-03 04:55:34.825442 | orchestrator | + echo 2026-02-03 04:55:34.825772 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-03 04:55:34.880554 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-03 04:55:34.880646 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-03 04:55:34.880662 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2026-02-03 04:55:35.382914 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node 
OK (3) nb_running_ram_node OK (0) 2026-02-03 04:55:35.393098 | orchestrator | 2026-02-03 04:55:35.393181 | orchestrator | # Status of Redis 2026-02-03 04:55:35.393201 | orchestrator | 2026-02-03 04:55:35.393216 | orchestrator | + echo 2026-02-03 04:55:35.393230 | orchestrator | + echo '# Status of Redis' 2026-02-03 04:55:35.393246 | orchestrator | + echo 2026-02-03 04:55:35.393262 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-02-03 04:55:35.398905 | orchestrator | TCP OK - 0.001 second response time on 192.168.16.10 port 6379|time=0.001371s;;;0.000000;10.000000 2026-02-03 04:55:35.399710 | orchestrator | 2026-02-03 04:55:35.399734 | orchestrator | # Create backup of MariaDB database 2026-02-03 04:55:35.399745 | orchestrator | 2026-02-03 04:55:35.399755 | orchestrator | + popd 2026-02-03 04:55:35.399765 | orchestrator | + echo 2026-02-03 04:55:35.399775 | orchestrator | + echo '# Create backup of MariaDB database' 2026-02-03 04:55:35.399785 | orchestrator | + echo 2026-02-03 04:55:35.399795 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-02-03 04:55:37.657065 | orchestrator | 2026-02-03 04:55:37 | INFO  | Task b4470cd5-de61-483b-97b5-7a106074e4d1 (mariadb_backup) was prepared for execution. 2026-02-03 04:55:37.657190 | orchestrator | 2026-02-03 04:55:37 | INFO  | It takes a moment until task b4470cd5-de61-483b-97b5-7a106074e4d1 (mariadb_backup) has been started and output is visible here. 
2026-02-03 04:58:17.642990 | orchestrator | 2026-02-03 04:58:17.643109 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-03 04:58:17.643125 | orchestrator | 2026-02-03 04:58:17.643137 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-03 04:58:17.643149 | orchestrator | Tuesday 03 February 2026 04:55:42 +0000 (0:00:00.196) 0:00:00.196 ****** 2026-02-03 04:58:17.643161 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:58:17.643173 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:58:17.643184 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:58:17.643195 | orchestrator | 2026-02-03 04:58:17.643206 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-03 04:58:17.643242 | orchestrator | Tuesday 03 February 2026 04:55:42 +0000 (0:00:00.355) 0:00:00.552 ****** 2026-02-03 04:58:17.643254 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-03 04:58:17.643265 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-02-03 04:58:17.643276 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-03 04:58:17.643287 | orchestrator | 2026-02-03 04:58:17.643298 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-03 04:58:17.643309 | orchestrator | 2026-02-03 04:58:17.643320 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-03 04:58:17.643330 | orchestrator | Tuesday 03 February 2026 04:55:43 +0000 (0:00:00.627) 0:00:01.179 ****** 2026-02-03 04:58:17.643347 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-03 04:58:17.643372 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-03 04:58:17.643399 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-03 04:58:17.643416 | orchestrator | 
2026-02-03 04:58:17.643434 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-03 04:58:17.643452 | orchestrator | Tuesday 03 February 2026 04:55:43 +0000 (0:00:00.432) 0:00:01.611 ****** 2026-02-03 04:58:17.643470 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 04:58:17.643518 | orchestrator | 2026-02-03 04:58:17.643537 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-02-03 04:58:17.643575 | orchestrator | Tuesday 03 February 2026 04:55:44 +0000 (0:00:00.610) 0:00:02.222 ****** 2026-02-03 04:58:17.643594 | orchestrator | ok: [testbed-node-0] 2026-02-03 04:58:17.643611 | orchestrator | ok: [testbed-node-2] 2026-02-03 04:58:17.643628 | orchestrator | ok: [testbed-node-1] 2026-02-03 04:58:17.643646 | orchestrator | 2026-02-03 04:58:17.643665 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-02-03 04:58:17.643681 | orchestrator | Tuesday 03 February 2026 04:55:48 +0000 (0:00:04.124) 0:00:06.347 ****** 2026-02-03 04:58:17.643698 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:58:17.643717 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:58:17.643737 | orchestrator | 2026-02-03 04:58:17.643754 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] *** 2026-02-03 04:58:17.643773 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-02-03 04:58:17.643791 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-02-03 04:58:17.643814 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-03 04:58:17.643834 | orchestrator | mariadb_bootstrap_restart 2026-02-03 04:58:17.643923 | orchestrator | changed: [testbed-node-0] 2026-02-03 04:58:17.643946 | orchestrator | 
2026-02-03 04:58:17.643966 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-03 04:58:17.643986 | orchestrator | skipping: no hosts matched 2026-02-03 04:58:17.644005 | orchestrator | 2026-02-03 04:58:17.644024 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-03 04:58:17.644042 | orchestrator | skipping: no hosts matched 2026-02-03 04:58:17.644061 | orchestrator | 2026-02-03 04:58:17.644080 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-03 04:58:17.644100 | orchestrator | skipping: no hosts matched 2026-02-03 04:58:17.644118 | orchestrator | 2026-02-03 04:58:17.644138 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-03 04:58:17.644156 | orchestrator | 2026-02-03 04:58:17.644175 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-03 04:58:17.644193 | orchestrator | Tuesday 03 February 2026 04:58:16 +0000 (0:02:28.062) 0:02:34.410 ****** 2026-02-03 04:58:17.644211 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:58:17.644230 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:58:17.644266 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:58:17.644285 | orchestrator | 2026-02-03 04:58:17.644304 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-03 04:58:17.644321 | orchestrator | Tuesday 03 February 2026 04:58:16 +0000 (0:00:00.325) 0:02:34.736 ****** 2026-02-03 04:58:17.644333 | orchestrator | skipping: [testbed-node-0] 2026-02-03 04:58:17.644343 | orchestrator | skipping: [testbed-node-1] 2026-02-03 04:58:17.644354 | orchestrator | skipping: [testbed-node-2] 2026-02-03 04:58:17.644365 | orchestrator | 2026-02-03 04:58:17.644376 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-03 04:58:17.644388 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 04:58:17.644401 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-03 04:58:17.644412 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-03 04:58:17.644423 | orchestrator | 2026-02-03 04:58:17.644434 | orchestrator | 2026-02-03 04:58:17.644445 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 04:58:17.644456 | orchestrator | Tuesday 03 February 2026 04:58:17 +0000 (0:00:00.451) 0:02:35.187 ****** 2026-02-03 04:58:17.644467 | orchestrator | =============================================================================== 2026-02-03 04:58:17.644558 | orchestrator | mariadb : Taking full database backup via Mariabackup ----------------- 148.06s 2026-02-03 04:58:17.644582 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 4.12s 2026-02-03 04:58:17.644596 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s 2026-02-03 04:58:17.644606 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.61s 2026-02-03 04:58:17.644617 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.45s 2026-02-03 04:58:17.644628 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.43s 2026-02-03 04:58:17.644639 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s 2026-02-03 04:58:17.644651 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.33s 2026-02-03 04:58:18.014694 | orchestrator | + sh -c 
/opt/configuration/scripts/check/300-openstack.sh 2026-02-03 04:58:18.021803 | orchestrator | + set -e 2026-02-03 04:58:18.021852 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-03 04:58:18.022911 | orchestrator | ++ export INTERACTIVE=false 2026-02-03 04:58:18.022935 | orchestrator | ++ INTERACTIVE=false 2026-02-03 04:58:18.022947 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-03 04:58:18.022958 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-03 04:58:18.022969 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-03 04:58:18.024807 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-02-03 04:58:18.031923 | orchestrator | 2026-02-03 04:58:18.031969 | orchestrator | # OpenStack endpoints 2026-02-03 04:58:18.031984 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-03 04:58:18.031996 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-03 04:58:18.032007 | orchestrator | + export OS_CLOUD=admin 2026-02-03 04:58:18.032018 | orchestrator | + OS_CLOUD=admin 2026-02-03 04:58:18.032032 | orchestrator | + echo 2026-02-03 04:58:18.032051 | orchestrator | + echo '# OpenStack endpoints' 2026-02-03 04:58:18.032069 | orchestrator | 2026-02-03 04:58:18.032088 | orchestrator | + echo 2026-02-03 04:58:18.032104 | orchestrator | + openstack endpoint list 2026-02-03 04:58:21.417892 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-02-03 04:58:21.417977 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-02-03 04:58:21.417988 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-02-03 04:58:21.418061 | orchestrator | | 
05f9ffcfcc264b638b49bfeb1fe0b549 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-02-03 04:58:21.418086 | orchestrator | | 076f6fd1ef7e4848b51fec656cdcebbd | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-02-03 04:58:21.418095 | orchestrator | | 08d952d5feea4309870e599c63e4c545 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-02-03 04:58:21.418102 | orchestrator | | 0b5eea9ee63e4763a87012ca737ea781 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-02-03 04:58:21.418110 | orchestrator | | 0dcaca4cabd64646b9abbe4987efccb3 | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 | 2026-02-03 04:58:21.418117 | orchestrator | | 261112a6d69145f99fa163a7d308d08d | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-02-03 04:58:21.418125 | orchestrator | | 2da49b879f024ec19b6aabce8d73ee25 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-02-03 04:58:21.418132 | orchestrator | | 422cb1ee26634d3e83b8691765823245 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-02-03 04:58:21.418139 | orchestrator | | 4350f0a2232d4c67af6750a47fbf5016 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-02-03 04:58:21.418146 | orchestrator | | 4d8c07e120fe4a51aae47a82f95abf6f | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-02-03 04:58:21.418154 | orchestrator | | 5cafe224cd434bd58ea8cc8f75842df2 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-02-03 04:58:21.418161 | orchestrator | | 6797b1bbcb594b8189979022c3d23d50 | RegionOne | designate | dns | True | 
internal | https://api-int.testbed.osism.xyz:9001 | 2026-02-03 04:58:21.418168 | orchestrator | | 71d461a612014a0eaa3ed78e772d1360 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-02-03 04:58:21.418176 | orchestrator | | 8868021d6c7e413f8517cb179dff62cf | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-02-03 04:58:21.418183 | orchestrator | | 93992719bd0c496da8791de969948080 | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 | 2026-02-03 04:58:21.418190 | orchestrator | | 95005ebe5aac4eae8cf12fde5e0470cb | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-02-03 04:58:21.418197 | orchestrator | | 966a47a24d944b43aec25b4018c0ec2a | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-02-03 04:58:21.418205 | orchestrator | | a21a02155cf347758200e828868875c1 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-02-03 04:58:21.418212 | orchestrator | | a73bd554c2c340ce8aa90e9e5f0395f5 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-02-03 04:58:21.418219 | orchestrator | | a8fc35e47060485fbb8231cd98dee57f | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-02-03 04:58:21.418249 | orchestrator | | ad9462e4e94447b19d6b7151df880425 | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 | 2026-02-03 04:58:21.418258 | orchestrator | | ae475fc648d747d18c0664d0f2411832 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-02-03 04:58:21.418269 | orchestrator | | c20b6ff24c624e9a8a76fb9dd6e15801 | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 | 2026-02-03 04:58:21.418276 | 
orchestrator | | c64df92b49f94fc48f595fbd4f4e8a0c | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-02-03 04:58:21.418283 | orchestrator | | d78a67f9e9104bc2bbd8f2f623ab71f8 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-02-03 04:58:21.418291 | orchestrator | | dd288e69f9a94573b8e6448f76b6bead | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-02-03 04:58:21.418298 | orchestrator | | eb676144960849f09d78ec0a578e25ef | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 | 2026-02-03 04:58:21.418305 | orchestrator | | edf87b37101d4aca838707b22fea2520 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-02-03 04:58:21.418312 | orchestrator | | ee0fb53581384bbaa60e4a7f6337093a | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-02-03 04:58:21.418320 | orchestrator | | fb57afa3a5f94608b450411f62c00324 | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 | 2026-02-03 04:58:21.418327 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-02-03 04:58:21.732058 | orchestrator | 2026-02-03 04:58:21.732165 | orchestrator | # Cinder 2026-02-03 04:58:21.732188 | orchestrator | 2026-02-03 04:58:21.732208 | orchestrator | + echo 2026-02-03 04:58:21.732228 | orchestrator | + echo '# Cinder' 2026-02-03 04:58:21.732248 | orchestrator | + echo 2026-02-03 04:58:21.732267 | orchestrator | + openstack volume service list 2026-02-03 04:58:24.577229 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-02-03 04:58:24.577333 | 
orchestrator | | Binary           | Host                       | Zone     | Status  | State | Updated At                 |
2026-02-03 04:58:24.577349 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-02-03 04:58:24.577361 | orchestrator | | cinder-scheduler | testbed-node-0             | internal | enabled | up    | 2026-02-03T04:58:20.000000 |
2026-02-03 04:58:24.577372 | orchestrator | | cinder-scheduler | testbed-node-1             | internal | enabled | up    | 2026-02-03T04:58:20.000000 |
2026-02-03 04:58:24.577384 | orchestrator | | cinder-scheduler | testbed-node-2             | internal | enabled | up    | 2026-02-03T04:58:21.000000 |
2026-02-03 04:58:24.577395 | orchestrator | | cinder-volume    | testbed-node-0@rbd-volumes | nova     | enabled | up    | 2026-02-03T04:58:20.000000 |
2026-02-03 04:58:24.577406 | orchestrator | | cinder-volume    | testbed-node-1@rbd-volumes | nova     | enabled | up    | 2026-02-03T04:58:17.000000 |
2026-02-03 04:58:24.577417 | orchestrator | | cinder-volume    | testbed-node-2@rbd-volumes | nova     | enabled | up    | 2026-02-03T04:58:19.000000 |
2026-02-03 04:58:24.577428 | orchestrator | | cinder-backup    | testbed-node-0             | nova     | enabled | up    | 2026-02-03T04:58:21.000000 |
2026-02-03 04:58:24.577439 | orchestrator | | cinder-backup    | testbed-node-1             | nova     | enabled | up    | 2026-02-03T04:58:23.000000 |
2026-02-03 04:58:24.577475 | orchestrator | | cinder-backup    | testbed-node-2             | nova     | enabled | up    | 2026-02-03T04:58:24.000000 |
2026-02-03 04:58:24.577487 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-02-03 04:58:24.871349 | orchestrator |
2026-02-03 04:58:24.871424 | orchestrator | # Neutron
2026-02-03 04:58:24.871433 | orchestrator |
2026-02-03 04:58:24.871440 | orchestrator | + echo
2026-02-03 04:58:24.871447 | orchestrator | + echo '# Neutron'
2026-02-03 04:58:24.871454 | orchestrator | + echo
2026-02-03 04:58:24.871461 | orchestrator | + openstack network agent list
2026-02-03 04:58:27.686695 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-02-03 04:58:27.686784 | orchestrator | | ID                                   | Agent Type                   | Host           | Availability Zone | Alive | State | Binary                     |
2026-02-03 04:58:27.686796 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-02-03 04:58:27.686804 | orchestrator | | testbed-node-1                       | OVN Controller Gateway agent | testbed-node-1 | nova              | :-)   | UP    | ovn-controller             |
2026-02-03 04:58:27.686812 | orchestrator | | testbed-node-4                       | OVN Controller agent         | testbed-node-4 |                   | :-)   | UP    | ovn-controller             |
2026-02-03 04:58:27.686819 | orchestrator | | testbed-node-2                       | OVN Controller Gateway agent | testbed-node-2 | nova              | :-)   | UP    | ovn-controller             |
2026-02-03 04:58:27.686844 | orchestrator | | testbed-node-5                       | OVN Controller agent         | testbed-node-5 |                   | :-)   | UP    | ovn-controller             |
2026-02-03 04:58:27.686852 | orchestrator | | testbed-node-3                       | OVN Controller agent         | testbed-node-3 |                   | :-)   | UP    | ovn-controller             |
2026-02-03 04:58:27.686859 | orchestrator | | testbed-node-0                       | OVN Controller Gateway agent | testbed-node-0 | nova              | :-)   | UP    | ovn-controller             |
2026-02-03 04:58:27.686866 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent           | testbed-node-3 |                   | :-)   | UP    | neutron-ovn-metadata-agent |
2026-02-03 04:58:27.686873 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent           | testbed-node-4 |                   | :-)   | UP    | neutron-ovn-metadata-agent |
2026-02-03 04:58:27.686881 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent           | testbed-node-5 |                   | :-)   | UP    | neutron-ovn-metadata-agent |
2026-02-03 04:58:27.686888 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-02-03 04:58:27.991676 | orchestrator | + openstack network service provider list
2026-02-03 04:58:30.632753 | orchestrator | +---------------+------+---------+
2026-02-03 04:58:30.632872 | orchestrator | | Service Type  | Name | Default |
2026-02-03 04:58:30.632890 | orchestrator | +---------------+------+---------+
2026-02-03 04:58:30.632902 | orchestrator | | L3_ROUTER_NAT | ovn  | True    |
2026-02-03 04:58:30.632913 | orchestrator | +---------------+------+---------+
2026-02-03 04:58:30.943586 | orchestrator |
2026-02-03 04:58:30.943691 | orchestrator | # Nova
2026-02-03 04:58:30.943716 | orchestrator |
2026-02-03 04:58:30.943736 | orchestrator | + echo
2026-02-03 04:58:30.943755 | orchestrator | + echo '# Nova'
2026-02-03 04:58:30.943774 | orchestrator | + echo
2026-02-03 04:58:30.943793 | orchestrator | + openstack compute service list
2026-02-03 04:58:33.780380 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-02-03 04:58:33.780465 | orchestrator | | ID                                   | Binary         | Host           | Zone     | Status  | State | Updated At                 |
2026-02-03 04:58:33.780475 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-02-03 04:58:33.780525 | orchestrator | | 6543711a-cc8a-4dec-ba01-59cb77627316 | nova-scheduler | testbed-node-0 | internal | enabled | up    | 2026-02-03T04:58:25.000000 |
2026-02-03 04:58:33.780533 | orchestrator | | d68d5ef2-57ac-4723-a252-94f5a1ae3627 | nova-scheduler | testbed-node-1 | internal | enabled | up    | 2026-02-03T04:58:30.000000 |
2026-02-03 04:58:33.780540 | orchestrator | | 24d09258-2e74-4fec-9c6e-fbd0318f0a7e | nova-scheduler | testbed-node-2 | internal | enabled | up    | 2026-02-03T04:58:31.000000 |
2026-02-03 04:58:33.780546 | orchestrator | | 56ba6072-1eb0-4fef-990f-846563e8f2e6 | nova-conductor | testbed-node-0 | internal | enabled | up    | 2026-02-03T04:58:23.000000 |
2026-02-03 04:58:33.780553 | orchestrator | | 23e8ee2c-65a4-4bac-ae2f-1bc3d568d23d | nova-conductor | testbed-node-1 | internal | enabled | up    | 2026-02-03T04:58:25.000000 |
2026-02-03 04:58:33.780559 | orchestrator | | 35cc0948-2c81-4776-807a-b71dc0cf9bdf | nova-conductor | testbed-node-2 | internal | enabled | up    | 2026-02-03T04:58:25.000000 |
2026-02-03 04:58:33.780566 | orchestrator | | be110bec-a503-4d45-8f55-f3021468a6f5 | nova-compute   | testbed-node-3 | nova     | enabled | up    | 2026-02-03T04:58:24.000000 |
2026-02-03 04:58:33.780572 | orchestrator | | 45704af5-b1ff-4ba4-bee6-17372929872c | nova-compute   | testbed-node-4 | nova     | enabled | up    | 2026-02-03T04:58:25.000000 |
2026-02-03 04:58:33.780578 | orchestrator | | 543c603d-a723-4d11-9e9b-fd33b8213725 | nova-compute   | testbed-node-5 | nova     | enabled | up    | 2026-02-03T04:58:25.000000 |
2026-02-03 04:58:33.780585 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-02-03 04:58:34.135070 | orchestrator | + openstack hypervisor list
2026-02-03 04:58:36.952317 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-02-03 04:58:36.952431 | orchestrator | | ID                                   | Hypervisor Hostname | Hypervisor Type | Host IP       | State |
2026-02-03 04:58:36.952447 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-02-03 04:58:36.952459 | orchestrator | | ea6d6039-44f5-483a-a8a6-28a93125696f | testbed-node-3      | QEMU            | 192.168.16.13 | up    |
2026-02-03 04:58:36.952471 | orchestrator | | 448c37d1-2b5a-42b4-87c3-e1ce99dcffb1 | testbed-node-5      | QEMU            | 192.168.16.15 | up    |
2026-02-03 04:58:36.952482 | orchestrator | | efab94c8-0cbd-4a65-b32e-f3848e15864e | testbed-node-4      | QEMU            | 192.168.16.14 | up    |
2026-02-03 04:58:36.952494 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-02-03 04:58:37.428672 | orchestrator |
2026-02-03 04:58:37.428782 | orchestrator | # Run OpenStack test play
2026-02-03 04:58:37.428800 | orchestrator |
2026-02-03 04:58:37.428812 | orchestrator | + echo
2026-02-03 04:58:37.428825 | orchestrator | + echo '# Run OpenStack test play'
2026-02-03 04:58:37.428837 | orchestrator | + echo
2026-02-03 04:58:37.428848 | orchestrator | + osism apply --environment openstack test
2026-02-03 04:58:39.594163 | orchestrator | 2026-02-03 04:58:39 | INFO  | Trying to run play test in environment openstack
2026-02-03 04:58:49.741702 | orchestrator | 2026-02-03 04:58:49 | INFO  | Task cd8ec280-7aa8-46cf-a268-12b4e85d226f (test) was prepared for execution.
2026-02-03 04:58:49.741784 | orchestrator | 2026-02-03 04:58:49 | INFO  | It takes a moment until task cd8ec280-7aa8-46cf-a268-12b4e85d226f (test) has been started and output is visible here.
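The service listings above are checked by eye; the same "everything is up" condition can be asserted mechanically. A minimal sketch, assuming the machine-readable `-f value` output of python-openstackclient — here a canned sample stands in for a live `openstack volume service list -f value -c Binary -c Host -c State` call:

```shell
# Canned sample standing in for:
#   openstack volume service list -f value -c Binary -c Host -c State
sample='cinder-scheduler testbed-node-0 up
cinder-volume testbed-node-1@rbd-volumes up
cinder-backup testbed-node-2 down'

# Collect every service whose State column (last field) is not "up".
down=$(printf '%s\n' "$sample" | awk '$NF != "up" {print $1 "/" $2}')
if [ -n "$down" ]; then
    echo "services not up: $down"
else
    echo "all services up"
fi
```

The same one-liner works for `openstack compute service list` and `openstack network agent list` (with `-c Alive` instead of `-c State`), so a job can fail fast instead of scrolling tables.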
2026-02-03 05:01:34.123634 | orchestrator |
2026-02-03 05:01:34.123807 | orchestrator | PLAY [Create test project] *****************************************************
2026-02-03 05:01:34.123825 | orchestrator |
2026-02-03 05:01:34.123837 | orchestrator | TASK [Create test domain] ******************************************************
2026-02-03 05:01:34.123850 | orchestrator | Tuesday 03 February 2026 04:58:54 +0000 (0:00:00.095) 0:00:00.095 ******
2026-02-03 05:01:34.123861 | orchestrator | changed: [localhost]
2026-02-03 05:01:34.123873 | orchestrator |
2026-02-03 05:01:34.123884 | orchestrator | TASK [Create test-admin user] **************************************************
2026-02-03 05:01:34.123895 | orchestrator | Tuesday 03 February 2026 04:58:58 +0000 (0:00:04.018) 0:00:04.114 ******
2026-02-03 05:01:34.123928 | orchestrator | changed: [localhost]
2026-02-03 05:01:34.123939 | orchestrator |
2026-02-03 05:01:34.123950 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2026-02-03 05:01:34.123961 | orchestrator | Tuesday 03 February 2026 04:59:03 +0000 (0:00:04.438) 0:00:08.552 ******
2026-02-03 05:01:34.123972 | orchestrator | changed: [localhost]
2026-02-03 05:01:34.123982 | orchestrator |
2026-02-03 05:01:34.123993 | orchestrator | TASK [Create test project] *****************************************************
2026-02-03 05:01:34.124004 | orchestrator | Tuesday 03 February 2026 04:59:10 +0000 (0:00:07.022) 0:00:15.574 ******
2026-02-03 05:01:34.124015 | orchestrator | changed: [localhost]
2026-02-03 05:01:34.124025 | orchestrator |
2026-02-03 05:01:34.124036 | orchestrator | TASK [Create test user] ********************************************************
2026-02-03 05:01:34.124047 | orchestrator | Tuesday 03 February 2026 04:59:14 +0000 (0:00:04.188) 0:00:19.762 ******
2026-02-03 05:01:34.124057 | orchestrator | changed: [localhost]
2026-02-03 05:01:34.124068 | orchestrator |
2026-02-03 05:01:34.124079 | orchestrator | TASK [Add member roles to user test] *******************************************
2026-02-03 05:01:34.124090 | orchestrator | Tuesday 03 February 2026 04:59:18 +0000 (0:00:04.436) 0:00:24.199 ******
2026-02-03 05:01:34.124101 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2026-02-03 05:01:34.124112 | orchestrator | changed: [localhost] => (item=member)
2026-02-03 05:01:34.124124 | orchestrator | changed: [localhost] => (item=creator)
2026-02-03 05:01:34.124134 | orchestrator |
2026-02-03 05:01:34.124145 | orchestrator | TASK [Create test server group] ************************************************
2026-02-03 05:01:34.124156 | orchestrator | Tuesday 03 February 2026 04:59:31 +0000 (0:00:12.312) 0:00:36.511 ******
2026-02-03 05:01:34.124167 | orchestrator | changed: [localhost]
2026-02-03 05:01:34.124180 | orchestrator |
2026-02-03 05:01:34.124193 | orchestrator | TASK [Create ssh security group] ***********************************************
2026-02-03 05:01:34.124205 | orchestrator | Tuesday 03 February 2026 04:59:35 +0000 (0:00:04.432) 0:00:40.943 ******
2026-02-03 05:01:34.124218 | orchestrator | changed: [localhost]
2026-02-03 05:01:34.124231 | orchestrator |
2026-02-03 05:01:34.124244 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2026-02-03 05:01:34.124257 | orchestrator | Tuesday 03 February 2026 04:59:40 +0000 (0:00:05.105) 0:00:46.049 ******
2026-02-03 05:01:34.124270 | orchestrator | changed: [localhost]
2026-02-03 05:01:34.124284 | orchestrator |
2026-02-03 05:01:34.124297 | orchestrator | TASK [Create icmp security group] **********************************************
2026-02-03 05:01:34.124310 | orchestrator | Tuesday 03 February 2026 04:59:44 +0000 (0:00:04.339) 0:00:50.389 ******
2026-02-03 05:01:34.124322 | orchestrator | changed: [localhost]
2026-02-03 05:01:34.124335 | orchestrator |
2026-02-03 05:01:34.124348 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2026-02-03 05:01:34.124360 | orchestrator | Tuesday 03 February 2026 04:59:49 +0000 (0:00:04.252) 0:00:54.641 ******
2026-02-03 05:01:34.124373 | orchestrator | changed: [localhost]
2026-02-03 05:01:34.124386 | orchestrator |
2026-02-03 05:01:34.124399 | orchestrator | TASK [Create test keypair] *****************************************************
2026-02-03 05:01:34.124412 | orchestrator | Tuesday 03 February 2026 04:59:53 +0000 (0:00:04.627) 0:00:59.269 ******
2026-02-03 05:01:34.124424 | orchestrator | changed: [localhost]
2026-02-03 05:01:34.124438 | orchestrator |
2026-02-03 05:01:34.124450 | orchestrator | TASK [Create test network] *****************************************************
2026-02-03 05:01:34.124462 | orchestrator | Tuesday 03 February 2026 04:59:58 +0000 (0:00:04.394) 0:01:03.663 ******
2026-02-03 05:01:34.124476 | orchestrator | changed: [localhost]
2026-02-03 05:01:34.124488 | orchestrator |
2026-02-03 05:01:34.124501 | orchestrator | TASK [Create test subnet] ******************************************************
2026-02-03 05:01:34.124514 | orchestrator | Tuesday 03 February 2026 05:00:03 +0000 (0:00:05.130) 0:01:08.793 ******
2026-02-03 05:01:34.124526 | orchestrator | changed: [localhost]
2026-02-03 05:01:34.124537 | orchestrator |
2026-02-03 05:01:34.124547 | orchestrator | TASK [Create test router] ******************************************************
2026-02-03 05:01:34.124571 | orchestrator | Tuesday 03 February 2026 05:00:09 +0000 (0:00:05.709) 0:01:14.503 ******
2026-02-03 05:01:34.124588 | orchestrator | changed: [localhost]
2026-02-03 05:01:34.124606 | orchestrator |
2026-02-03 05:01:34.124625 | orchestrator | PLAY [Manage test instances and volumes] ***************************************
2026-02-03 05:01:34.124643 | orchestrator |
2026-02-03 05:01:34.124656 | orchestrator | TASK [Get test server group] ***************************************************
2026-02-03 05:01:34.124667 | orchestrator | Tuesday 03 February 2026 05:00:21 +0000 (0:00:12.008) 0:01:26.511 ******
2026-02-03 05:01:34.124678 | orchestrator | ok: [localhost]
2026-02-03 05:01:34.124734 | orchestrator |
2026-02-03 05:01:34.124754 | orchestrator | TASK [Detach test volume] ******************************************************
2026-02-03 05:01:34.124772 | orchestrator | Tuesday 03 February 2026 05:00:24 +0000 (0:00:03.880) 0:01:30.392 ******
2026-02-03 05:01:34.124790 | orchestrator | skipping: [localhost]
2026-02-03 05:01:34.124802 | orchestrator |
2026-02-03 05:01:34.124813 | orchestrator | TASK [Delete test volume] ******************************************************
2026-02-03 05:01:34.124823 | orchestrator | Tuesday 03 February 2026 05:00:24 +0000 (0:00:00.057) 0:01:30.449 ******
2026-02-03 05:01:34.124834 | orchestrator | skipping: [localhost]
2026-02-03 05:01:34.124845 | orchestrator |
2026-02-03 05:01:34.124855 | orchestrator | TASK [Delete test instances] ***************************************************
2026-02-03 05:01:34.124881 | orchestrator | Tuesday 03 February 2026 05:00:25 +0000 (0:00:00.057) 0:01:30.507 ******
2026-02-03 05:01:34.124892 | orchestrator | skipping: [localhost] => (item=test-4)
2026-02-03 05:01:34.124903 | orchestrator | skipping: [localhost] => (item=test-3)
2026-02-03 05:01:34.124934 | orchestrator | skipping: [localhost] => (item=test-2)
2026-02-03 05:01:34.124946 | orchestrator | skipping: [localhost] => (item=test-1)
2026-02-03 05:01:34.124957 | orchestrator | skipping: [localhost] => (item=test)
2026-02-03 05:01:34.124968 | orchestrator | skipping: [localhost]
2026-02-03 05:01:34.124979 | orchestrator |
2026-02-03 05:01:34.124989 | orchestrator | TASK [Wait for instance deletion to complete] **********************************
2026-02-03 05:01:34.125000 | orchestrator | Tuesday 03 February 2026 05:00:25 +0000 (0:00:00.206) 0:01:30.713 ******
2026-02-03 05:01:34.125011 | orchestrator | skipping: [localhost]
2026-02-03 05:01:34.125022 | orchestrator |
2026-02-03 05:01:34.125032 | orchestrator | TASK [Create test instances] ***************************************************
2026-02-03 05:01:34.125043 | orchestrator | Tuesday 03 February 2026 05:00:25 +0000 (0:00:00.156) 0:01:30.870 ******
2026-02-03 05:01:34.125054 | orchestrator | changed: [localhost] => (item=test)
2026-02-03 05:01:34.125064 | orchestrator | changed: [localhost] => (item=test-1)
2026-02-03 05:01:34.125075 | orchestrator | changed: [localhost] => (item=test-2)
2026-02-03 05:01:34.125086 | orchestrator | changed: [localhost] => (item=test-3)
2026-02-03 05:01:34.125097 | orchestrator | changed: [localhost] => (item=test-4)
2026-02-03 05:01:34.125107 | orchestrator |
2026-02-03 05:01:34.125118 | orchestrator | TASK [Wait for instance creation to complete] **********************************
2026-02-03 05:01:34.125129 | orchestrator | Tuesday 03 February 2026 05:00:30 +0000 (0:00:05.277) 0:01:36.148 ******
2026-02-03 05:01:34.125140 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-02-03 05:01:34.125152 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left).
2026-02-03 05:01:34.125162 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left).
2026-02-03 05:01:34.125173 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left).
2026-02-03 05:01:34.125186 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j870799985538.3759', 'results_file': '/ansible/.ansible_async/j870799985538.3759', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-02-03 05:01:34.125200 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j298529005371.3784', 'results_file': '/ansible/.ansible_async/j298529005371.3784', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-02-03 05:01:34.125220 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j27622378545.3809', 'results_file': '/ansible/.ansible_async/j27622378545.3809', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-02-03 05:01:34.125231 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j518080191734.3834', 'results_file': '/ansible/.ansible_async/j518080191734.3834', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-02-03 05:01:34.125242 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j770723123399.3859', 'results_file': '/ansible/.ansible_async/j770723123399.3859', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-02-03 05:01:34.125253 | orchestrator |
2026-02-03 05:01:34.125264 | orchestrator | TASK [Add metadata to instances] ***********************************************
2026-02-03 05:01:34.125275 | orchestrator | Tuesday 03 February 2026 05:01:18 +0000 (0:00:47.807) 0:02:23.955 ******
2026-02-03 05:01:34.125286 | orchestrator | changed: [localhost] => (item=test)
2026-02-03 05:01:34.125297 | orchestrator | changed: [localhost] => (item=test-1)
2026-02-03 05:01:34.125308 | orchestrator | changed: [localhost] => (item=test-2)
2026-02-03 05:01:34.125318 | orchestrator | changed: [localhost] => (item=test-3)
2026-02-03 05:01:34.125329 | orchestrator | changed: [localhost] => (item=test-4)
2026-02-03 05:01:34.125340 | orchestrator |
2026-02-03 05:01:34.125351 | orchestrator | TASK [Wait for metadata to be added] *******************************************
2026-02-03 05:01:34.125362 | orchestrator | Tuesday 03 February 2026 05:01:24 +0000 (0:00:05.551) 0:02:29.507 ******
2026-02-03 05:01:34.125372 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left).
2026-02-03 05:01:34.125384 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j624615590693.3956', 'results_file': '/ansible/.ansible_async/j624615590693.3956', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-02-03 05:01:34.125395 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j172725928391.3981', 'results_file': '/ansible/.ansible_async/j172725928391.3981', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-02-03 05:01:34.125406 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j161633431963.4006', 'results_file': '/ansible/.ansible_async/j161633431963.4006', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-02-03 05:01:34.125432 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j214600151572.4031', 'results_file': '/ansible/.ansible_async/j214600151572.4031', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-02-03 05:02:15.923299 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j8639297295.4056', 'results_file': '/ansible/.ansible_async/j8639297295.4056', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-02-03 05:02:15.923413 | orchestrator |
2026-02-03 05:02:15.923431 | orchestrator | TASK [Add tag to instances] ****************************************************
2026-02-03 05:02:15.923445 | orchestrator | Tuesday 03 February 2026 05:01:34 +0000 (0:00:10.078) 0:02:39.585 ******
2026-02-03 05:02:15.923457 | orchestrator | changed: [localhost] => (item=test)
2026-02-03 05:02:15.923469 | orchestrator | changed: [localhost] => (item=test-1)
2026-02-03 05:02:15.923481 | orchestrator | changed: [localhost] => (item=test-2)
2026-02-03 05:02:15.923491 | orchestrator | changed: [localhost] => (item=test-3)
2026-02-03 05:02:15.923503 | orchestrator | changed: [localhost] => (item=test-4)
2026-02-03 05:02:15.923514 | orchestrator |
2026-02-03 05:02:15.923549 | orchestrator | TASK [Wait for tags to be added] ***********************************************
2026-02-03 05:02:15.923561 | orchestrator | Tuesday 03 February 2026 05:01:39 +0000 (0:00:05.217) 0:02:44.802 ******
2026-02-03 05:02:15.923573 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left).
2026-02-03 05:02:15.923586 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j278645698075.4132', 'results_file': '/ansible/.ansible_async/j278645698075.4132', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-03 05:02:15.923598 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j574101546239.4157', 'results_file': '/ansible/.ansible_async/j574101546239.4157', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-03 05:02:15.923610 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j763101361776.4183', 'results_file': '/ansible/.ansible_async/j763101361776.4183', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-03 05:02:15.923621 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j368073655011.4209', 'results_file': '/ansible/.ansible_async/j368073655011.4209', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-03 05:02:15.923632 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j725704262040.4235', 'results_file': '/ansible/.ansible_async/j725704262040.4235', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-03 05:02:15.923643 | orchestrator | 2026-02-03 05:02:15.923654 | orchestrator | TASK [Create test volume] ****************************************************** 2026-02-03 05:02:15.923665 | orchestrator | Tuesday 03 February 2026 05:01:49 +0000 (0:00:10.162) 0:02:54.965 ****** 2026-02-03 05:02:15.923676 | orchestrator | changed: [localhost] 2026-02-03 05:02:15.923688 | orchestrator | 2026-02-03 05:02:15.923699 | orchestrator | TASK [Attach test volume] ****************************************************** 2026-02-03 05:02:15.923710 | orchestrator | Tuesday 03 February 
2026 05:01:56 +0000 (0:00:06.825) 0:03:01.790 ****** 2026-02-03 05:02:15.923721 | orchestrator | changed: [localhost] 2026-02-03 05:02:15.923762 | orchestrator | 2026-02-03 05:02:15.923773 | orchestrator | TASK [Create floating ip address] ********************************************** 2026-02-03 05:02:15.923799 | orchestrator | Tuesday 03 February 2026 05:02:10 +0000 (0:00:13.927) 0:03:15.718 ****** 2026-02-03 05:02:15.923811 | orchestrator | ok: [localhost] 2026-02-03 05:02:15.923822 | orchestrator | 2026-02-03 05:02:15.923846 | orchestrator | TASK [Print floating ip address] *********************************************** 2026-02-03 05:02:15.923861 | orchestrator | Tuesday 03 February 2026 05:02:15 +0000 (0:00:05.318) 0:03:21.036 ****** 2026-02-03 05:02:15.923874 | orchestrator | ok: [localhost] => { 2026-02-03 05:02:15.923888 | orchestrator |  "msg": "192.168.112.141" 2026-02-03 05:02:15.923901 | orchestrator | } 2026-02-03 05:02:15.923914 | orchestrator | 2026-02-03 05:02:15.923927 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 05:02:15.923941 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-03 05:02:15.923956 | orchestrator | 2026-02-03 05:02:15.923969 | orchestrator | 2026-02-03 05:02:15.923982 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 05:02:15.923996 | orchestrator | Tuesday 03 February 2026 05:02:15 +0000 (0:00:00.048) 0:03:21.084 ****** 2026-02-03 05:02:15.924009 | orchestrator | =============================================================================== 2026-02-03 05:02:15.924023 | orchestrator | Wait for instance creation to complete --------------------------------- 47.81s 2026-02-03 05:02:15.924036 | orchestrator | Attach test volume ----------------------------------------------------- 13.93s 2026-02-03 05:02:15.924049 | orchestrator | Add member roles to user 
test ------------------------------------------ 12.31s 2026-02-03 05:02:15.924086 | orchestrator | Create test router ----------------------------------------------------- 12.01s 2026-02-03 05:02:15.924100 | orchestrator | Wait for tags to be added ---------------------------------------------- 10.16s 2026-02-03 05:02:15.924113 | orchestrator | Wait for metadata to be added ------------------------------------------ 10.08s 2026-02-03 05:02:15.924127 | orchestrator | Add manager role to user test-admin ------------------------------------- 7.02s 2026-02-03 05:02:15.924158 | orchestrator | Create test volume ------------------------------------------------------ 6.83s 2026-02-03 05:02:15.924173 | orchestrator | Create test subnet ------------------------------------------------------ 5.71s 2026-02-03 05:02:15.924185 | orchestrator | Add metadata to instances ----------------------------------------------- 5.55s 2026-02-03 05:02:15.924196 | orchestrator | Create floating ip address ---------------------------------------------- 5.32s 2026-02-03 05:02:15.924207 | orchestrator | Create test instances --------------------------------------------------- 5.28s 2026-02-03 05:02:15.924218 | orchestrator | Add tag to instances ---------------------------------------------------- 5.22s 2026-02-03 05:02:15.924229 | orchestrator | Create test network ----------------------------------------------------- 5.13s 2026-02-03 05:02:15.924239 | orchestrator | Create ssh security group ----------------------------------------------- 5.11s 2026-02-03 05:02:15.924250 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.63s 2026-02-03 05:02:15.924261 | orchestrator | Create test-admin user -------------------------------------------------- 4.44s 2026-02-03 05:02:15.924272 | orchestrator | Create test user -------------------------------------------------------- 4.44s 2026-02-03 05:02:15.924283 | orchestrator | Create test server group 
------------------------------------------------ 4.43s 2026-02-03 05:02:15.924294 | orchestrator | Create test keypair ----------------------------------------------------- 4.39s 2026-02-03 05:02:16.309181 | orchestrator | + server_list 2026-02-03 05:02:16.309299 | orchestrator | + openstack --os-cloud test server list 2026-02-03 05:02:20.104478 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-03 05:02:20.104581 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-02-03 05:02:20.104597 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-03 05:02:20.104609 | orchestrator | | f8131f44-6032-4ae8-a980-e57ec860a9a9 | test-4 | ACTIVE | test=192.168.112.130, 192.168.200.216 | N/A (booted from volume) | SCS-1L-1 | 2026-02-03 05:02:20.104621 | orchestrator | | 4115e639-04eb-4aa5-bc2a-8bc9c0d3c0f9 | test-3 | ACTIVE | test=192.168.112.148, 192.168.200.227 | N/A (booted from volume) | SCS-1L-1 | 2026-02-03 05:02:20.104632 | orchestrator | | 62dafd62-ef04-49c6-8a92-6f5aeb442d51 | test-1 | ACTIVE | test=192.168.112.137, 192.168.200.211 | N/A (booted from volume) | SCS-1L-1 | 2026-02-03 05:02:20.104643 | orchestrator | | ef5b51d0-17a0-4aeb-a93c-919f967d6e79 | test-2 | ACTIVE | test=192.168.112.177, 192.168.200.26 | N/A (booted from volume) | SCS-1L-1 | 2026-02-03 05:02:20.104654 | orchestrator | | f105fc69-4abf-4a0a-b549-ce262a6f2d42 | test | ACTIVE | test=192.168.112.141, 192.168.200.171 | N/A (booted from volume) | SCS-1L-1 | 2026-02-03 05:02:20.104665 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-03 05:02:20.508538 | orchestrator | + openstack --os-cloud test server show test 2026-02-03 05:02:23.859337 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-03 05:02:23.859484 | orchestrator | | Field | Value | 2026-02-03 05:02:23.859541 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-03 05:02:23.859571 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-03 05:02:23.859589 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-03 05:02:23.859607 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-03 05:02:23.859625 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-02-03 05:02:23.859641 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-03 05:02:23.859658 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-03 05:02:23.859699 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-03 05:02:23.859717 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-03 05:02:23.859774 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-03 05:02:23.859828 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-03 05:02:23.859855 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-03 05:02:23.859875 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-03 05:02:23.859893 | orchestrator | | OS-EXT-STS:power_state | 
Running | 2026-02-03 05:02:23.859911 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-03 05:02:23.859929 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-03 05:02:23.859946 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-03T05:01:01.000000 | 2026-02-03 05:02:23.859975 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-03 05:02:23.860011 | orchestrator | | accessIPv4 | | 2026-02-03 05:02:23.860032 | orchestrator | | accessIPv6 | | 2026-02-03 05:02:23.860049 | orchestrator | | addresses | test=192.168.112.141, 192.168.200.171 | 2026-02-03 05:02:23.860074 | orchestrator | | config_drive | | 2026-02-03 05:02:23.860094 | orchestrator | | created | 2026-02-03T05:00:34Z | 2026-02-03 05:02:23.860111 | orchestrator | | description | None | 2026-02-03 05:02:23.860128 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-03 05:02:23.860144 | orchestrator | | hostId | 773aa03521b4efd257b57c593b4ba307fdff75cdc9214dc785668bf6 | 2026-02-03 05:02:23.860161 | orchestrator | | host_status | None | 2026-02-03 05:02:23.860201 | orchestrator | | id | f105fc69-4abf-4a0a-b549-ce262a6f2d42 | 2026-02-03 05:02:23.860234 | orchestrator | | image | N/A (booted from volume) | 2026-02-03 05:02:23.860252 | orchestrator | | key_name | test | 2026-02-03 05:02:23.860268 | orchestrator | | locked | False | 2026-02-03 05:02:23.860285 | orchestrator | | locked_reason | None | 2026-02-03 05:02:23.860301 | orchestrator | | name | test | 2026-02-03 05:02:23.860318 | orchestrator | | pinned_availability_zone | None | 2026-02-03 05:02:23.860335 | orchestrator | | progress | 0 | 2026-02-03 05:02:23.860352 | orchestrator | | 
project_id | af2120dc4c3d41498ca6943549e34e21 | 2026-02-03 05:02:23.860386 | orchestrator | | properties | hostname='test' | 2026-02-03 05:02:23.860439 | orchestrator | | security_groups | name='icmp' | 2026-02-03 05:02:23.860461 | orchestrator | | | name='ssh' | 2026-02-03 05:02:23.860479 | orchestrator | | server_groups | None | 2026-02-03 05:02:23.860496 | orchestrator | | status | ACTIVE | 2026-02-03 05:02:23.860529 | orchestrator | | tags | test | 2026-02-03 05:02:23.860548 | orchestrator | | trusted_image_certificates | None | 2026-02-03 05:02:23.860567 | orchestrator | | updated | 2026-02-03T05:01:25Z | 2026-02-03 05:02:23.860585 | orchestrator | | user_id | 7692e927433c486fb33138e98566e1b4 | 2026-02-03 05:02:23.860605 | orchestrator | | volumes_attached | delete_on_termination='True', id='2e16392c-8af3-4dce-b037-62bce0caa01d' | 2026-02-03 05:02:23.860634 | orchestrator | | | delete_on_termination='False', id='625b3550-5ed6-4fa1-a8dd-34dc4dfe62c7' | 2026-02-03 05:02:23.863565 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-03 05:02:24.173858 | orchestrator | + openstack --os-cloud test server show test-1 2026-02-03 05:02:27.486530 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-03 
05:02:27.486637 | orchestrator | | Field | Value |
2026-02-03 05:02:27.486660 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-03 05:02:27.486673 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-02-03 05:02:27.486685 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-02-03 05:02:27.486696 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-02-03 05:02:27.486708 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2026-02-03 05:02:27.486778 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-02-03 05:02:27.486792 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-02-03 05:02:27.486821 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-02-03 05:02:27.486834 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-02-03 05:02:27.486846 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-02-03 05:02:27.486862 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-02-03 05:02:27.486874 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-02-03 05:02:27.486885 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-02-03 05:02:27.486897 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-02-03 05:02:27.486916 | orchestrator | | OS-EXT-STS:task_state | None |
2026-02-03 05:02:27.486928 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-02-03 05:02:27.486939 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-03T05:01:01.000000 |
2026-02-03 05:02:27.486958 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-02-03 05:02:27.486970 | orchestrator | | accessIPv4 | |
2026-02-03 05:02:27.486982 | orchestrator | | accessIPv6 | |
2026-02-03 05:02:27.486998 | orchestrator | | addresses | test=192.168.112.137, 192.168.200.211 |
2026-02-03 05:02:27.487010 | orchestrator | | config_drive | |
2026-02-03 05:02:27.487021 | orchestrator | | created | 2026-02-03T05:00:36Z |
2026-02-03 05:02:27.487038 | orchestrator | | description | None |
2026-02-03 05:02:27.487050 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-02-03 05:02:27.487062 | orchestrator | | hostId | c14ffc407c5a667cea0aba8c06c3a158e17a0afaa68b536ccec32b52 |
2026-02-03 05:02:27.487078 | orchestrator | | host_status | None |
2026-02-03 05:02:27.487097 | orchestrator | | id | 62dafd62-ef04-49c6-8a92-6f5aeb442d51 |
2026-02-03 05:02:27.487111 | orchestrator | | image | N/A (booted from volume) |
2026-02-03 05:02:27.487125 | orchestrator | | key_name | test |
2026-02-03 05:02:27.487143 | orchestrator | | locked | False |
2026-02-03 05:02:27.487157 | orchestrator | | locked_reason | None |
2026-02-03 05:02:27.487171 | orchestrator | | name | test-1 |
2026-02-03 05:02:27.487190 | orchestrator | | pinned_availability_zone | None |
2026-02-03 05:02:27.487205 | orchestrator | | progress | 0 |
2026-02-03 05:02:27.487218 | orchestrator | | project_id | af2120dc4c3d41498ca6943549e34e21 |
2026-02-03 05:02:27.487232 | orchestrator | | properties | hostname='test-1' |
2026-02-03 05:02:27.487254 | orchestrator | | security_groups | name='icmp' |
2026-02-03 05:02:27.487268 | orchestrator | | | name='ssh' |
2026-02-03 05:02:27.487281 | orchestrator | | server_groups | None |
2026-02-03 05:02:27.487296 | orchestrator | | status | ACTIVE |
2026-02-03 05:02:27.487311 | orchestrator | | tags | test |
2026-02-03 05:02:27.487332 | orchestrator | | trusted_image_certificates | None |
2026-02-03 05:02:27.487345 | orchestrator | | updated | 2026-02-03T05:01:26Z |
2026-02-03 05:02:27.487359 | orchestrator | | user_id | 7692e927433c486fb33138e98566e1b4 |
2026-02-03 05:02:27.487372 | orchestrator | | volumes_attached | delete_on_termination='True', id='e5af8dbd-18ee-4611-8eaa-36a58ae71e98' |
2026-02-03 05:02:27.491064 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-03 05:02:27.828952 | orchestrator | + openstack --os-cloud test server show test-2
2026-02-03 05:02:31.037333 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-03 05:02:31.037441 | orchestrator | | Field | Value |
2026-02-03 05:02:31.037475 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-03 05:02:31.037492 |
orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-02-03 05:02:31.037521 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-02-03 05:02:31.037532 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-02-03 05:02:31.037542 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2026-02-03 05:02:31.037553 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-02-03 05:02:31.037563 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-02-03 05:02:31.037590 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-02-03 05:02:31.037602 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-02-03 05:02:31.037612 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-02-03 05:02:31.037622 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-02-03 05:02:31.037644 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-02-03 05:02:31.037655 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-02-03 05:02:31.037665 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-02-03 05:02:31.037675 | orchestrator | | OS-EXT-STS:task_state | None |
2026-02-03 05:02:31.037685 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-02-03 05:02:31.037696 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-03T05:01:01.000000 |
2026-02-03 05:02:31.037712 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-02-03 05:02:31.037723 | orchestrator | | accessIPv4 | |
2026-02-03 05:02:31.037733 | orchestrator | | accessIPv6 | |
2026-02-03 05:02:31.037793 | orchestrator | | addresses | test=192.168.112.177, 192.168.200.26 |
2026-02-03 05:02:31.037809 | orchestrator | | config_drive | |
2026-02-03 05:02:31.037820 | orchestrator | | created | 2026-02-03T05:00:36Z |
2026-02-03 05:02:31.037830 | orchestrator | | description | None |
2026-02-03 05:02:31.037840 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-02-03 05:02:31.037850 | orchestrator | | hostId | 773aa03521b4efd257b57c593b4ba307fdff75cdc9214dc785668bf6 |
2026-02-03 05:02:31.037860 | orchestrator | | host_status | None |
2026-02-03 05:02:31.037877 | orchestrator | | id | ef5b51d0-17a0-4aeb-a93c-919f967d6e79 |
2026-02-03 05:02:31.037888 | orchestrator | | image | N/A (booted from volume) |
2026-02-03 05:02:31.037898 | orchestrator | | key_name | test |
2026-02-03 05:02:31.037919 | orchestrator | | locked | False |
2026-02-03 05:02:31.037929 | orchestrator | | locked_reason | None |
2026-02-03 05:02:31.037939 | orchestrator | | name | test-2 |
2026-02-03 05:02:31.037949 | orchestrator | | pinned_availability_zone | None |
2026-02-03 05:02:31.037959 | orchestrator | | progress | 0 |
2026-02-03 05:02:31.037969 | orchestrator | | project_id | af2120dc4c3d41498ca6943549e34e21 |
2026-02-03 05:02:31.037980 | orchestrator | | properties | hostname='test-2' |
2026-02-03 05:02:31.037996 | orchestrator | | security_groups | name='icmp' |
2026-02-03 05:02:31.038007 | orchestrator | | | name='ssh' |
2026-02-03 05:02:31.038075 | orchestrator | | server_groups | None |
2026-02-03 05:02:31.038091 | orchestrator | | status | ACTIVE |
2026-02-03 05:02:31.038132 | orchestrator | | tags | test |
2026-02-03 05:02:31.038143 | orchestrator | | trusted_image_certificates | None |
2026-02-03 05:02:31.038153 | orchestrator | | updated | 2026-02-03T05:01:27Z |
2026-02-03 05:02:31.038163 | orchestrator | | user_id | 7692e927433c486fb33138e98566e1b4 |
2026-02-03 05:02:31.038173 | orchestrator | | volumes_attached | delete_on_termination='True', id='f3ce58a8-f563-469a-978f-ab309a447942' |
2026-02-03 05:02:31.042191 | orchestrator |
+-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-03 05:02:31.411666 | orchestrator | + openstack --os-cloud test server show test-3
2026-02-03 05:02:34.604813 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-03 05:02:34.604972 | orchestrator | | Field | Value |
2026-02-03 05:02:34.605004 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-03 05:02:34.605035 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-02-03 05:02:34.605048 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-02-03 05:02:34.605060 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-02-03 05:02:34.605072 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2026-02-03 05:02:34.605083 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-02-03 05:02:34.605095 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-02-03 05:02:34.605126 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-02-03 05:02:34.605146 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-02-03 05:02:34.605158 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-02-03 05:02:34.605169 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-02-03 05:02:34.605186 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-02-03 05:02:34.605198 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-02-03 05:02:34.605210 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-02-03 05:02:34.605221 | orchestrator | | OS-EXT-STS:task_state | None |
2026-02-03 05:02:34.605237 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-02-03 05:02:34.605256 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-03T05:01:03.000000 |
2026-02-03 05:02:34.605284 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-02-03 05:02:34.605316 | orchestrator | | accessIPv4 | |
2026-02-03 05:02:34.605336 | orchestrator | | accessIPv6 | |
2026-02-03 05:02:34.605355 | orchestrator | | addresses | test=192.168.112.148, 192.168.200.227 |
2026-02-03 05:02:34.605857 | orchestrator | | config_drive | |
2026-02-03 05:02:34.605889 | orchestrator | | created | 2026-02-03T05:00:37Z |
2026-02-03 05:02:34.605900 | orchestrator | | description | None |
2026-02-03 05:02:34.605912 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-02-03 05:02:34.605923 | orchestrator | | hostId | c14ffc407c5a667cea0aba8c06c3a158e17a0afaa68b536ccec32b52 |
2026-02-03 05:02:34.605935 | orchestrator | | host_status | None |
2026-02-03 05:02:34.605969 | orchestrator | | id | 4115e639-04eb-4aa5-bc2a-8bc9c0d3c0f9 |
2026-02-03 05:02:34.605991 | orchestrator | | image | N/A (booted from volume) |
2026-02-03 05:02:34.606011 | orchestrator | | key_name | test |
2026-02-03 05:02:34.606135 | orchestrator | | locked | False |
2026-02-03 05:02:34.606149 | orchestrator | | locked_reason | None |
2026-02-03 05:02:34.606189 | orchestrator | | name | test-3 |
2026-02-03 05:02:34.606201 | orchestrator | | pinned_availability_zone | None |
2026-02-03 05:02:34.606212 | orchestrator | | progress | 0 |
2026-02-03 05:02:34.606224 | orchestrator | | project_id | af2120dc4c3d41498ca6943549e34e21 |
2026-02-03 05:02:34.606245 | orchestrator | | properties | hostname='test-3' |
2026-02-03 05:02:34.606267 | orchestrator | | security_groups | name='icmp' |
2026-02-03 05:02:34.606285 | orchestrator | | | name='ssh' |
2026-02-03 05:02:34.606297 | orchestrator | | server_groups | None |
2026-02-03 05:02:34.606309 | orchestrator | | status | ACTIVE |
2026-02-03 05:02:34.606320 | orchestrator | | tags | test |
2026-02-03 05:02:34.606332 | orchestrator | | trusted_image_certificates | None |
2026-02-03 05:02:34.606343 | orchestrator | | updated | 2026-02-03T05:01:27Z |
2026-02-03 05:02:34.606361 | orchestrator | | user_id | 7692e927433c486fb33138e98566e1b4 |
2026-02-03 05:02:34.606395 | orchestrator | | volumes_attached | delete_on_termination='True', id='fbaa563a-d331-46ee-b30d-63eef766ce1c' |
2026-02-03 05:02:34.609133 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-03 05:02:34.934661 | orchestrator | + openstack --os-cloud test server show test-4
2026-02-03 05:02:38.142437 |
orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-03 05:02:38.142536 | orchestrator | | Field | Value |
2026-02-03 05:02:38.142550 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-03 05:02:38.142555 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-02-03 05:02:38.142559 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-02-03 05:02:38.142563 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-02-03 05:02:38.142567 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
2026-02-03 05:02:38.142583 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-02-03 05:02:38.142588 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-02-03 05:02:38.142601 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-02-03 05:02:38.142606 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-02-03 05:02:38.142612 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-02-03 05:02:38.142616 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-02-03 05:02:38.142620 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-02-03 05:02:38.142624 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-02-03 05:02:38.142628 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-02-03 05:02:38.142632 | orchestrator | | OS-EXT-STS:task_state | None |
2026-02-03 05:02:38.142641 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-02-03 05:02:38.142645 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-03T05:01:02.000000 |
2026-02-03 05:02:38.142652 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-02-03 05:02:38.142658 | orchestrator | | accessIPv4 | |
2026-02-03 05:02:38.142662 | orchestrator | | accessIPv6 | |
2026-02-03 05:02:38.142666 | orchestrator | | addresses | test=192.168.112.130, 192.168.200.216 |
2026-02-03 05:02:38.142670 | orchestrator | | config_drive | |
2026-02-03 05:02:38.142674 | orchestrator | | created | 2026-02-03T05:00:38Z |
2026-02-03 05:02:38.142678 | orchestrator | | description | None |
2026-02-03 05:02:38.142686 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-02-03 05:02:38.142690 | orchestrator | | hostId | c14ffc407c5a667cea0aba8c06c3a158e17a0afaa68b536ccec32b52 |
2026-02-03 05:02:38.142694 | orchestrator | | host_status | None |
2026-02-03 05:02:38.142701 | orchestrator | | id | f8131f44-6032-4ae8-a980-e57ec860a9a9 |
2026-02-03 05:02:38.142708 | orchestrator | | image | N/A (booted from volume) |
2026-02-03 05:02:38.142712 | orchestrator | | key_name | test |
2026-02-03 05:02:38.142716 | orchestrator | | locked | False |
2026-02-03 05:02:38.142720 | orchestrator | | locked_reason | None |
2026-02-03 05:02:38.142724 | orchestrator | | name | test-4 |
2026-02-03 05:02:38.142732 | orchestrator | | pinned_availability_zone | None |
2026-02-03 05:02:38.142736 | orchestrator | | progress | 0 |
2026-02-03 05:02:38.142739 | orchestrator | | project_id | af2120dc4c3d41498ca6943549e34e21 |
2026-02-03 05:02:38.142743 | orchestrator | | properties | hostname='test-4' |
2026-02-03 05:02:38.142795 | orchestrator | | security_groups | name='icmp' |
2026-02-03 05:02:38.142806 | orchestrator | | | name='ssh' |
2026-02-03 05:02:38.142814 | orchestrator | | server_groups | None |
2026-02-03 05:02:38.142818 | orchestrator | | status | ACTIVE |
2026-02-03 05:02:38.142821 | orchestrator | | tags | test |
2026-02-03 05:02:38.142829 | orchestrator | | trusted_image_certificates | None |
2026-02-03 05:02:38.142833 | orchestrator | | updated | 2026-02-03T05:01:28Z |
2026-02-03 05:02:38.142837 | orchestrator | | user_id | 7692e927433c486fb33138e98566e1b4 |
2026-02-03 05:02:38.142841 | orchestrator | | volumes_attached | delete_on_termination='True', id='d4b8fb5c-0cd5-42ee-a2f7-4683c2bf42e5' |
2026-02-03 05:02:38.146597 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-03 05:02:38.460718 | orchestrator | + server_ping
2026-02-03 05:02:38.462560 | orchestrator | ++ tr -d '\r'
2026-02-03 05:02:38.462625 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-02-03 05:02:41.556521 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-03 05:02:41.556612 | orchestrator | + ping -c3 192.168.112.141
2026-02-03 05:02:41.575695 | orchestrator | PING 192.168.112.141 (192.168.112.141) 56(84) bytes of data.
2026-02-03 05:02:41.575822 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=1 ttl=63 time=10.1 ms
2026-02-03 05:02:42.569726 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=2 ttl=63 time=2.75 ms
2026-02-03 05:02:43.570861 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=3 ttl=63 time=1.97 ms
2026-02-03 05:02:43.571086 | orchestrator |
2026-02-03 05:02:43.571109 | orchestrator | --- 192.168.112.141 ping statistics ---
2026-02-03 05:02:43.571123 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-03 05:02:43.571134 | orchestrator | rtt min/avg/max/mdev = 1.971/4.950/10.134/3.679 ms
2026-02-03 05:02:43.571159 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-03 05:02:43.571171 | orchestrator | + ping -c3 192.168.112.130
2026-02-03 05:02:43.583827 | orchestrator | PING 192.168.112.130 (192.168.112.130) 56(84) bytes of data.
2026-02-03 05:02:43.583906 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=1 ttl=63 time=7.09 ms
2026-02-03 05:02:44.580604 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=2 ttl=63 time=2.55 ms
2026-02-03 05:02:45.581823 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=3 ttl=63 time=1.61 ms
2026-02-03 05:02:45.581926 | orchestrator |
2026-02-03 05:02:45.581941 | orchestrator | --- 192.168.112.130 ping statistics ---
2026-02-03 05:02:45.581955 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-03 05:02:45.581992 | orchestrator | rtt min/avg/max/mdev = 1.609/3.749/7.089/2.392 ms
2026-02-03 05:02:45.582006 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-03 05:02:45.582075 | orchestrator | + ping -c3 192.168.112.177
2026-02-03 05:02:45.595719 | orchestrator | PING 192.168.112.177 (192.168.112.177) 56(84) bytes of data.
2026-02-03 05:02:45.595868 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=1 ttl=63 time=8.18 ms
2026-02-03 05:02:46.591552 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=2 ttl=63 time=2.33 ms
2026-02-03 05:02:47.593151 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=3 ttl=63 time=2.05 ms
2026-02-03 05:02:47.593241 | orchestrator |
2026-02-03 05:02:47.593253 | orchestrator | --- 192.168.112.177 ping statistics ---
2026-02-03 05:02:47.593262 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-03 05:02:47.593334 | orchestrator | rtt min/avg/max/mdev = 2.052/4.187/8.181/2.826 ms
2026-02-03 05:02:47.593536 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-03 05:02:47.593551 | orchestrator | + ping -c3 192.168.112.148
2026-02-03 05:02:47.610066 | orchestrator | PING 192.168.112.148 (192.168.112.148) 56(84) bytes of data.
2026-02-03 05:02:47.610140 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=1 ttl=63 time=10.8 ms
2026-02-03 05:02:48.602590 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=2 ttl=63 time=2.33 ms
2026-02-03 05:02:49.604179 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=3 ttl=63 time=2.15 ms
2026-02-03 05:02:49.604281 | orchestrator |
2026-02-03 05:02:49.604298 | orchestrator | --- 192.168.112.148 ping statistics ---
2026-02-03 05:02:49.604311 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-03 05:02:49.604323 | orchestrator | rtt min/avg/max/mdev = 2.153/5.111/10.847/4.056 ms
2026-02-03 05:02:49.604866 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-03 05:02:49.604894 | orchestrator | + ping -c3 192.168.112.137
2026-02-03 05:02:49.617456 | orchestrator | PING 192.168.112.137 (192.168.112.137) 56(84) bytes of data.
2026-02-03 05:02:49.617521 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=1 ttl=63 time=7.31 ms
2026-02-03 05:02:50.614706 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=2 ttl=63 time=2.52 ms
2026-02-03 05:02:51.615934 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=3 ttl=63 time=1.66 ms
2026-02-03 05:02:51.616041 | orchestrator |
2026-02-03 05:02:51.616058 | orchestrator | --- 192.168.112.137 ping statistics ---
2026-02-03 05:02:51.616071 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-02-03 05:02:51.616083 | orchestrator | rtt min/avg/max/mdev = 1.663/3.831/7.308/2.483 ms
2026-02-03 05:02:51.616584 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-02-03 05:02:51.708363 | orchestrator | ok: Runtime: 0:10:27.606644
2026-02-03 05:02:51.749491 |
2026-02-03 05:02:51.749614 | TASK [Run tempest]
2026-02-03 05:02:52.286567 | orchestrator | skipping: Conditional result was False
2026-02-03 05:02:52.307683 |
2026-02-03 05:02:52.307915 | TASK [Check prometheus alert status]
2026-02-03 05:02:52.851643 | orchestrator | skipping: Conditional result was False
2026-02-03 05:02:52.866458 |
2026-02-03 05:02:52.866618 | PLAY [Upgrade testbed]
2026-02-03 05:02:52.877873 |
2026-02-03 05:02:52.878043 | TASK [Print next ceph version]
2026-02-03 05:02:52.989458 | orchestrator | ok
2026-02-03 05:02:53.002509 |
2026-02-03 05:02:53.002671 | TASK [Print next openstack version]
2026-02-03 05:02:53.081566 | orchestrator | ok
2026-02-03 05:02:53.092229 |
2026-02-03 05:02:53.092351 | TASK [Print next manager version]
2026-02-03 05:02:53.165255 | orchestrator | ok
2026-02-03 05:02:53.172589 |
2026-02-03 05:02:53.172701 | TASK [Set cloud fact (Zuul deployment)]
2026-02-03 05:02:53.225707 | orchestrator | ok
2026-02-03 05:02:53.235028 |
2026-02-03 05:02:53.235149 | TASK [Set cloud fact (local deployment)]
2026-02-03 05:02:53.259456 | orchestrator | skipping: Conditional result was False
2026-02-03 05:02:53.269433 |
2026-02-03
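The `server_ping` trace above loops over the ACTIVE floating IPs and pings each one. A minimal sketch of what such a helper could look like, reconstructed from the trace (the `clean_addresses` name is a hypothetical addition; the trace inlines `tr -d '\r'` directly):

```shell
# Hypothetical reconstruction of the server_ping helper seen in the trace.
# clean_addresses strips stray carriage returns from CLI output so the
# for-loop receives clean dotted-quad addresses.
clean_addresses() {
    tr -d '\r'
}

server_ping() {
    # List ACTIVE floating IPs and ping each one three times, as the trace shows.
    for address in $(openstack --os-cloud test floating ip list \
            --status ACTIVE -f value -c "Floating IP Address" | clean_addresses); do
        ping -c3 "$address"
    done
}

# The cleanup step alone can be exercised without a cloud:
printf '192.168.112.141\r\n192.168.112.130\r\n' | clean_addresses
```

The `tr -d '\r'` matters because table/value output piped through some terminals carries CR-LF line endings, and `ping 1.2.3.4\r` would fail name resolution.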
05:02:53.269547 | TASK [Fetch manager address]
2026-02-03 05:02:53.550223 | orchestrator | ok
2026-02-03 05:02:53.560857 |
2026-02-03 05:02:53.561015 | TASK [Set manager_host address]
2026-02-03 05:02:53.640304 | orchestrator | ok
2026-02-03 05:02:53.652496 |
2026-02-03 05:02:53.652630 | TASK [Run upgrade]
2026-02-03 05:02:54.436290 | orchestrator | + set -e
2026-02-03 05:02:54.436402 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1
2026-02-03 05:02:54.436412 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1
2026-02-03 05:02:54.436422 | orchestrator | + CEPH_VERSION=reef
2026-02-03 05:02:54.436427 | orchestrator | + OPENSTACK_VERSION=2024.2
2026-02-03 05:02:54.436432 | orchestrator | + KOLLA_NAMESPACE=kolla/release
2026-02-03 05:02:54.436442 | orchestrator | + sh -c '/opt/configuration/scripts/upgrade-manager.sh 10.0.0-rc.1 reef 2024.2 kolla/release'
2026-02-03 05:02:54.448870 | orchestrator | + set -e
2026-02-03 05:02:54.448969 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-03 05:02:54.448989 | orchestrator | ++ export INTERACTIVE=false
2026-02-03 05:02:54.449010 | orchestrator | ++ INTERACTIVE=false
2026-02-03 05:02:54.449022 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-03 05:02:54.449044 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-03 05:02:54.450230 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible
2026-02-03 05:02:54.488598 | orchestrator | + OLD_MANAGER_VERSION=v0.20251130.0
2026-02-03 05:02:54.490078 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible
2026-02-03 05:02:54.536053 | orchestrator |
2026-02-03 05:02:54.536143 | orchestrator | # UPGRADE MANAGER
2026-02-03 05:02:54.536161 | orchestrator |
2026-02-03 05:02:54.536200 | orchestrator | + OLD_OPENSTACK_VERSION=2024.2
2026-02-03 05:02:54.536214 | orchestrator | + echo
2026-02-03 05:02:54.536225 | orchestrator | + echo '# UPGRADE MANAGER'
2026-02-03 05:02:54.536237 | orchestrator | + echo
2026-02-03 05:02:54.536247 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1
2026-02-03 05:02:54.536257 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1
2026-02-03 05:02:54.536267 | orchestrator | + CEPH_VERSION=reef
2026-02-03 05:02:54.536277 | orchestrator | + OPENSTACK_VERSION=2024.2
2026-02-03 05:02:54.536287 | orchestrator | + KOLLA_NAMESPACE=kolla/release
2026-02-03 05:02:54.536362 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0-rc.1
2026-02-03 05:02:54.545913 | orchestrator | + set -e
2026-02-03 05:02:54.546474 | orchestrator | + VERSION=10.0.0-rc.1
2026-02-03 05:02:54.546520 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0-rc.1/g' /opt/configuration/environments/manager/configuration.yml
2026-02-03 05:02:54.554310 | orchestrator | + [[ 10.0.0-rc.1 != \l\a\t\e\s\t ]]
2026-02-03 05:02:54.554402 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-02-03 05:02:54.558272 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-02-03 05:02:54.561920 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-02-03 05:02:54.572416 | orchestrator | /opt/configuration ~
2026-02-03 05:02:54.572481 | orchestrator | + set -e
2026-02-03 05:02:54.572496 | orchestrator | + pushd /opt/configuration
2026-02-03 05:02:54.572508 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-03 05:02:54.572522 | orchestrator | + source /opt/venv/bin/activate
2026-02-03 05:02:54.574658 | orchestrator | ++ deactivate nondestructive
2026-02-03 05:02:54.574711 | orchestrator | ++ '[' -n '' ']'
2026-02-03 05:02:54.574723 | orchestrator | ++ '[' -n '' ']'
2026-02-03 05:02:54.574734 | orchestrator | ++ hash -r
2026-02-03 05:02:54.574745 | orchestrator | ++ '[' -n '' ']'
2026-02-03 05:02:54.574756 | orchestrator | ++ unset VIRTUAL_ENV
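The `set-manager-version.sh` trace above boils down to three `sed` edits: pin `manager_version`, and for a pinned (non-`latest`) release drop the explicit `ceph_version`/`openstack_version` lines so the release defaults apply. A minimal sketch against a scratch file (the file contents here are stand-ins, not the real `/opt/configuration` tree; the trace's bash `[[ ]]` test is written as POSIX `[ ]`):

```shell
# Stand-in for /opt/configuration/environments/manager/configuration.yml
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
manager_version: 9.5.0
ceph_version: quincy
openstack_version: 2024.1
EOF

VERSION=10.0.0-rc.1
# Pin the new manager version in place...
sed -i "s/manager_version: .*/manager_version: ${VERSION}/g" "$cfg"
# ...and for a pinned release, delete the explicit component versions,
# mirroring the /ceph_version:/d and /openstack_version:/d edits in the trace.
if [ "$VERSION" != latest ]; then
    sed -i /ceph_version:/d "$cfg"
    sed -i /openstack_version:/d "$cfg"
fi

cat "$cfg"   # -> single remaining line: manager_version: 10.0.0-rc.1
```

Note that GNU `sed -i` is assumed here, matching the Ubuntu/Debian hosts seen in this job.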
2026-02-03 05:02:54.574792 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-02-03 05:02:54.574803 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-02-03 05:02:54.574815 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-02-03 05:02:54.574826 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-02-03 05:02:54.574837 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-02-03 05:02:54.574848 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-02-03 05:02:54.574860 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-03 05:02:54.574872 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-03 05:02:54.574882 | orchestrator | ++ export PATH
2026-02-03 05:02:54.574893 | orchestrator | ++ '[' -n '' ']'
2026-02-03 05:02:54.574904 | orchestrator | ++ '[' -z '' ']'
2026-02-03 05:02:54.574915 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-02-03 05:02:54.574926 | orchestrator | ++ PS1='(venv) '
2026-02-03 05:02:54.574937 | orchestrator | ++ export PS1
2026-02-03 05:02:54.574948 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-02-03 05:02:54.574958 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-02-03 05:02:54.574969 | orchestrator | ++ hash -r
2026-02-03 05:02:54.575063 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-02-03 05:02:55.926503 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-02-03 05:02:55.927571 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2026-02-03 05:02:55.929368 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-02-03 05:02:55.930668 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-02-03 05:02:55.931821 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-02-03 05:02:55.942810 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-02-03 05:02:55.944750 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-02-03 05:02:55.945927 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-02-03 05:02:55.947442 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-02-03 05:02:55.987679 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4)
2026-02-03 05:02:55.989564 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-02-03 05:02:55.991575 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-02-03 05:02:55.993283 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4)
2026-02-03 05:02:55.998132 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-02-03 05:02:56.284632 | orchestrator | ++ which gilt
2026-02-03 05:02:56.287507 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-02-03 05:02:56.287579 | orchestrator | + /opt/venv/bin/gilt overlay
2026-02-03 05:02:56.549725 | orchestrator | osism.cfg-generics:
2026-02-03 05:02:56.665890 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-02-03 05:02:56.667348 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-02-03 05:02:56.668915 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-02-03 05:02:56.668943 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-02-03 05:02:57.700288 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-02-03 05:02:57.710561 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-02-03 05:02:58.083323 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-02-03 05:02:58.148049 | orchestrator | ~
2026-02-03 05:02:58.148145 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-03 05:02:58.148154 | orchestrator | + deactivate
2026-02-03 05:02:58.148159 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-02-03 05:02:58.148165 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-03 05:02:58.148169 | orchestrator | + export PATH
2026-02-03 05:02:58.148174 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-02-03 05:02:58.148178 | orchestrator | + '[' -n '' ']'
2026-02-03 05:02:58.148182 | orchestrator | + hash -r
2026-02-03 05:02:58.148186 | orchestrator | + '[' -n '' ']'
2026-02-03 05:02:58.148190 | orchestrator | + unset VIRTUAL_ENV
2026-02-03 05:02:58.148194 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-02-03 05:02:58.148197 |
orchestrator | + '[' '!' '' = nondestructive ']' 2026-02-03 05:02:58.148202 | orchestrator | + unset -f deactivate 2026-02-03 05:02:58.148206 | orchestrator | + popd 2026-02-03 05:02:58.149807 | orchestrator | + [[ 10.0.0-rc.1 == \l\a\t\e\s\t ]] 2026-02-03 05:02:58.149913 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release 2026-02-03 05:02:58.154992 | orchestrator | + set -e 2026-02-03 05:02:58.155052 | orchestrator | + NAMESPACE=kolla/release 2026-02-03 05:02:58.155069 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-03 05:02:58.160502 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-02-03 05:02:58.166512 | orchestrator | + set -e 2026-02-03 05:02:58.167179 | orchestrator | /opt/configuration ~ 2026-02-03 05:02:58.167201 | orchestrator | + pushd /opt/configuration 2026-02-03 05:02:58.167207 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-03 05:02:58.167213 | orchestrator | + source /opt/venv/bin/activate 2026-02-03 05:02:58.167218 | orchestrator | ++ deactivate nondestructive 2026-02-03 05:02:58.167222 | orchestrator | ++ '[' -n '' ']' 2026-02-03 05:02:58.167227 | orchestrator | ++ '[' -n '' ']' 2026-02-03 05:02:58.167232 | orchestrator | ++ hash -r 2026-02-03 05:02:58.167237 | orchestrator | ++ '[' -n '' ']' 2026-02-03 05:02:58.167241 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-03 05:02:58.167246 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-03 05:02:58.167251 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-03 05:02:58.167328 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-03 05:02:58.167351 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-03 05:02:58.167355 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-03 05:02:58.167364 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-03 05:02:58.167408 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-03 05:02:58.167430 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-03 05:02:58.167436 | orchestrator | ++ export PATH 2026-02-03 05:02:58.167465 | orchestrator | ++ '[' -n '' ']' 2026-02-03 05:02:58.167496 | orchestrator | ++ '[' -z '' ']' 2026-02-03 05:02:58.167546 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-03 05:02:58.167552 | orchestrator | ++ PS1='(venv) ' 2026-02-03 05:02:58.167556 | orchestrator | ++ export PS1 2026-02-03 05:02:58.167606 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-03 05:02:58.167612 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-03 05:02:58.167616 | orchestrator | ++ hash -r 2026-02-03 05:02:58.167801 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-02-03 05:02:58.750417 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-02-03 05:02:58.751847 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-02-03 05:02:58.752926 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-02-03 05:02:58.754566 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-02-03 05:02:58.755590 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-02-03 05:02:58.766997 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-02-03 05:02:58.768510 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-02-03 05:02:58.769599 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-02-03 05:02:58.771115 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-02-03 05:02:58.808046 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-02-03 05:02:58.810500 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-02-03 05:02:58.813397 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-02-03 05:02:58.814947 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4) 2026-02-03 05:02:58.822212 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-02-03 05:02:59.059258 | orchestrator | ++ which gilt 2026-02-03 05:02:59.062259 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-02-03 05:02:59.062317 | orchestrator | + /opt/venv/bin/gilt overlay 2026-02-03 05:02:59.251882 | orchestrator | osism.cfg-generics: 2026-02-03 05:02:59.367232 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-02-03 05:02:59.367717 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-02-03 05:02:59.368341 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-02-03 05:02:59.368389 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-02-03 05:03:00.234266 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-02-03 05:03:00.250459 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-02-03 05:03:00.641861 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-02-03 05:03:00.702737 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-03 05:03:00.702901 | orchestrator | + deactivate 2026-02-03 05:03:00.702970 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-03 05:03:00.702988 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-03 05:03:00.703000 | orchestrator | + export PATH 2026-02-03 05:03:00.703181 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-03 05:03:00.703200 | orchestrator | + '[' -n '' ']' 2026-02-03 05:03:00.703211 | orchestrator | + hash -r 2026-02-03 05:03:00.703222 | orchestrator | + '[' -n '' ']' 2026-02-03 05:03:00.703234 | orchestrator | + unset VIRTUAL_ENV 2026-02-03 05:03:00.703246 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-03 05:03:00.703257 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-03 05:03:00.703269 | orchestrator | + unset -f deactivate 2026-02-03 05:03:00.703328 | orchestrator | ~ 2026-02-03 05:03:00.703358 | orchestrator | + popd 2026-02-03 05:03:00.705648 | orchestrator | ++ semver v0.20251130.0 6.0.0 2026-02-03 05:03:00.753032 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-03 05:03:00.753576 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-02-03 05:03:00.880104 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-03 05:03:00.880199 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml 2026-02-03 05:03:00.887673 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml 2026-02-03 05:03:00.899019 | orchestrator | +++ semver v0.20251130.0 9.5.0 2026-02-03 05:03:00.976666 | orchestrator | ++ '[' -1 -le 0 ']' 2026-02-03 05:03:00.977571 | orchestrator | +++ semver 10.0.0-rc.1 10.0.0-0 2026-02-03 05:03:01.082713 | orchestrator | ++ '[' 1 -ge 0 ']' 2026-02-03 05:03:01.082838 | orchestrator | ++ echo true 2026-02-03 05:03:01.083189 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=true 2026-02-03 05:03:01.084994 | orchestrator | +++ semver 2024.2 2024.2 2026-02-03 05:03:01.165107 | orchestrator | ++ '[' 0 -le 0 ']' 2026-02-03 05:03:01.165563 | orchestrator | +++ semver 2024.2 2025.1 2026-02-03 05:03:01.225837 | orchestrator | ++ '[' -1 -ge 0 ']' 2026-02-03 05:03:01.225954 | orchestrator | ++ echo false 2026-02-03 05:03:01.225976 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false 2026-02-03 05:03:01.226134 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-03 05:03:01.226158 | orchestrator | + echo 'om_rpc_vhost: openstack' 2026-02-03 05:03:01.226174 | orchestrator | + echo 'om_notify_vhost: openstack' 2026-02-03 05:03:01.226193 | orchestrator | + sed -i 's#manager_listener_broker_vhost: .*#manager_listener_broker_vhost: /openstack#g' /opt/configuration/environments/manager/configuration.yml 
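The trace above decides `MANAGER_UPGRADE_CROSSES_10` and `OPENSTACK_UPGRADE_CROSSES_2025` by comparing versions with a `semver` helper that prints `-1`/`0`/`1` and testing the result with `-le 0` / `-ge 0`. A minimal sketch of that crossing check, assuming only GNU `sort -V` as an approximation of semver ordering (the function names `ver_cmp`/`crosses`, the single shared boundary, and the sample versions are illustrative, not the testbed's own scripts):

```shell
#!/usr/bin/env bash
# Sketch of the version-boundary ("crossing") check seen in the trace.
# ver_cmp/crosses are hypothetical names; the real job calls a `semver`
# helper and uses slightly different bounds for the old and new version.
set -e

# Print -1, 0 or 1 for A<B / A=B / A>B, stripping a leading "v" and
# delegating ordering to GNU `sort -V` (close to, not exactly, semver).
ver_cmp() {
  local a=${1#v} b=${2#v}
  if [ "$a" = "$b" ]; then echo 0; return; fi
  if [ "$(printf '%s\n%s\n' "$a" "$b" | sort -V | head -n1)" = "$a" ]; then
    echo -1
  else
    echo 1
  fi
}

# An upgrade "crosses" BOUNDARY when OLD <= BOUNDARY and NEW >= BOUNDARY,
# mirroring the `[ ... -le 0 ]` / `[ ... -ge 0 ]` pair in the trace.
crosses() {
  if [ "$(ver_cmp "$1" "$3")" -le 0 ] && [ "$(ver_cmp "$2" "$3")" -ge 0 ]; then
    echo true
  else
    echo false
  fi
}

crosses 9.5.0 10.0.0-rc.1 10.0.0-0   # the 9.x -> 10.0.0-rc.1 case: true
crosses 2024.2 2024.2 2025.1         # the 2024.2 -> 2024.2 case: false
```

Note that, as in the trace, the prerelease `10.0.0-rc.1` still compares greater than the boundary `10.0.0-0`, because `rc.1` orders after the numeric prerelease identifier `0`; this is why the RabbitMQ 3-to-4 migration steps (vhost rewrite, `RABBITMQ3TO4=true`) are activated for this upgrade.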
2026-02-03 05:03:01.230888 | orchestrator | + echo 'export RABBITMQ3TO4=true' 2026-02-03 05:03:01.230915 | orchestrator | + sudo tee -a /opt/manager-vars.sh 2026-02-03 05:03:01.248327 | orchestrator | export RABBITMQ3TO4=true 2026-02-03 05:03:01.250452 | orchestrator | + osism update manager 2026-02-03 05:03:07.424892 | orchestrator | Collecting uv 2026-02-03 05:03:07.532594 | orchestrator | Downloading uv-0.9.28-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB) 2026-02-03 05:03:07.556432 | orchestrator | Downloading uv-0.9.28-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (22.7 MB) 2026-02-03 05:03:08.445430 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 22.7/22.7 MB 32.4 MB/s eta 0:00:00 2026-02-03 05:03:08.500568 | orchestrator | Installing collected packages: uv 2026-02-03 05:03:08.961157 | orchestrator | Successfully installed uv-0.9.28 2026-02-03 05:03:09.715639 | orchestrator | Resolved 11 packages in 354ms 2026-02-03 05:03:09.757138 | orchestrator | Downloading cryptography (4.2MiB) 2026-02-03 05:03:09.758212 | orchestrator | Downloading netaddr (2.2MiB) 2026-02-03 05:03:09.758455 | orchestrator | Downloading ansible (54.5MiB) 2026-02-03 05:03:09.758911 | orchestrator | Downloading ansible-core (2.1MiB) 2026-02-03 05:03:10.066551 | orchestrator | Downloaded netaddr 2026-02-03 05:03:10.129158 | orchestrator | Downloaded cryptography 2026-02-03 05:03:10.283226 | orchestrator | Downloaded ansible-core 2026-02-03 05:03:16.685044 | orchestrator | Downloaded ansible 2026-02-03 05:03:16.685141 | orchestrator | Prepared 11 packages in 6.96s 2026-02-03 05:03:17.235176 | orchestrator | Installed 11 packages in 548ms 2026-02-03 05:03:17.235271 | orchestrator | + ansible==11.11.0 2026-02-03 05:03:17.235286 | orchestrator | + ansible-core==2.18.13 2026-02-03 05:03:17.235298 | orchestrator | + cffi==2.0.0 2026-02-03 05:03:17.235310 | orchestrator | + cryptography==46.0.4 2026-02-03 05:03:17.235323 | orchestrator | + 
jinja2==3.1.6 2026-02-03 05:03:17.235334 | orchestrator | + markupsafe==3.0.3 2026-02-03 05:03:17.235345 | orchestrator | + netaddr==1.3.0 2026-02-03 05:03:17.235356 | orchestrator | + packaging==26.0 2026-02-03 05:03:17.235366 | orchestrator | + pycparser==3.0 2026-02-03 05:03:17.235377 | orchestrator | + pyyaml==6.0.3 2026-02-03 05:03:17.235389 | orchestrator | + resolvelib==1.0.1 2026-02-03 05:03:18.482898 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-203991ysbs18eh/tmpysh9_x4r/ansible-collection-services7qg84ohr'... 2026-02-03 05:03:19.837190 | orchestrator | Your branch is up to date with 'origin/main'. 2026-02-03 05:03:19.837319 | orchestrator | Already on 'main' 2026-02-03 05:03:20.444933 | orchestrator | Starting galaxy collection install process 2026-02-03 05:03:20.445032 | orchestrator | Process install dependency map 2026-02-03 05:03:20.445050 | orchestrator | Starting collection install process 2026-02-03 05:03:20.445063 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services' 2026-02-03 05:03:20.445076 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services 2026-02-03 05:03:20.445088 | orchestrator | osism.services:999.0.0 was installed successfully 2026-02-03 05:03:20.970151 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-204091h0mrmx1c/tmpklgpe0iy/ansible-playbooks-managerfuo_3pk3'... 2026-02-03 05:03:21.640513 | orchestrator | Your branch is up to date with 'origin/main'. 
2026-02-03 05:03:21.640639 | orchestrator | Already on 'main' 2026-02-03 05:03:21.954288 | orchestrator | Starting galaxy collection install process 2026-02-03 05:03:21.954390 | orchestrator | Process install dependency map 2026-02-03 05:03:21.954406 | orchestrator | Starting collection install process 2026-02-03 05:03:21.954419 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager' 2026-02-03 05:03:21.954433 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager 2026-02-03 05:03:21.954445 | orchestrator | osism.manager:999.0.0 was installed successfully 2026-02-03 05:03:22.655734 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use 2026-02-03 05:03:22.655853 | orchestrator | -vvvv to see details 2026-02-03 05:03:23.137333 | orchestrator | 2026-02-03 05:03:23.137468 | orchestrator | PLAY [Apply role manager] ****************************************************** 2026-02-03 05:03:23.137488 | orchestrator | 2026-02-03 05:03:23.137501 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-03 05:03:27.548227 | orchestrator | ok: [testbed-manager] 2026-02-03 05:03:27.548334 | orchestrator | 2026-02-03 05:03:27.548352 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-02-03 05:03:27.623597 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-02-03 05:03:27.623694 | orchestrator | 2026-02-03 05:03:27.623732 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-02-03 05:03:29.641719 | orchestrator | ok: [testbed-manager] 2026-02-03 05:03:29.641855 | orchestrator | 2026-02-03 05:03:29.641871 | orchestrator | TASK 
[osism.services.manager : Gather variables for each operating system] ***** 2026-02-03 05:03:29.706134 | orchestrator | ok: [testbed-manager] 2026-02-03 05:03:29.706210 | orchestrator | 2026-02-03 05:03:29.706221 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-02-03 05:03:29.783539 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-02-03 05:03:29.783619 | orchestrator | 2026-02-03 05:03:29.783629 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-02-03 05:03:34.478927 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible) 2026-02-03 05:03:34.479036 | orchestrator | ok: [testbed-manager] => (item=/opt/archive) 2026-02-03 05:03:34.479052 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration) 2026-02-03 05:03:34.479076 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data) 2026-02-03 05:03:34.479087 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-02-03 05:03:34.479098 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets) 2026-02-03 05:03:34.479109 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets) 2026-02-03 05:03:34.479120 | orchestrator | ok: [testbed-manager] => (item=/opt/state) 2026-02-03 05:03:34.479132 | orchestrator | 2026-02-03 05:03:34.479143 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-02-03 05:03:35.785584 | orchestrator | ok: [testbed-manager] 2026-02-03 05:03:35.785685 | orchestrator | 2026-02-03 05:03:35.785701 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-02-03 05:03:36.785255 | orchestrator | ok: [testbed-manager] 2026-02-03 05:03:36.785378 | orchestrator | 2026-02-03 05:03:36.785405 | orchestrator | TASK [osism.services.manager : Include ara 
config tasks] *********************** 2026-02-03 05:03:36.872522 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-02-03 05:03:36.872622 | orchestrator | 2026-02-03 05:03:36.872639 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-02-03 05:03:38.825077 | orchestrator | ok: [testbed-manager] => (item=ara) 2026-02-03 05:03:38.825184 | orchestrator | ok: [testbed-manager] => (item=ara-server) 2026-02-03 05:03:38.825200 | orchestrator | 2026-02-03 05:03:38.825214 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-02-03 05:03:39.831316 | orchestrator | ok: [testbed-manager] 2026-02-03 05:03:39.831394 | orchestrator | 2026-02-03 05:03:39.831401 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-02-03 05:03:39.909748 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:03:39.909885 | orchestrator | 2026-02-03 05:03:39.909902 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-02-03 05:03:39.989751 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-02-03 05:03:39.989880 | orchestrator | 2026-02-03 05:03:39.989901 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-02-03 05:03:40.964598 | orchestrator | ok: [testbed-manager] 2026-02-03 05:03:40.964701 | orchestrator | 2026-02-03 05:03:40.964718 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-02-03 05:03:41.056416 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-02-03 05:03:41.056500 | 
orchestrator | 2026-02-03 05:03:41.056511 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-02-03 05:03:43.159605 | orchestrator | ok: [testbed-manager] => (item=None) 2026-02-03 05:03:43.159706 | orchestrator | ok: [testbed-manager] => (item=None) 2026-02-03 05:03:43.159722 | orchestrator | ok: [testbed-manager] 2026-02-03 05:03:43.159735 | orchestrator | 2026-02-03 05:03:43.159748 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-02-03 05:03:45.129441 | orchestrator | ok: [testbed-manager] 2026-02-03 05:03:45.129561 | orchestrator | 2026-02-03 05:03:45.129580 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-02-03 05:03:45.180799 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:03:45.180904 | orchestrator | 2026-02-03 05:03:45.180916 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-02-03 05:03:45.289655 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-02-03 05:03:45.289750 | orchestrator | 2026-02-03 05:03:45.289769 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-02-03 05:03:46.010580 | orchestrator | ok: [testbed-manager] 2026-02-03 05:03:46.010691 | orchestrator | 2026-02-03 05:03:46.010708 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-02-03 05:03:46.566671 | orchestrator | ok: [testbed-manager] 2026-02-03 05:03:46.566774 | orchestrator | 2026-02-03 05:03:46.566790 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-02-03 05:03:48.446559 | orchestrator | ok: [testbed-manager] => (item=conductor) 2026-02-03 05:03:48.446693 | orchestrator | ok: [testbed-manager] => 
(item=openstack) 2026-02-03 05:03:48.446720 | orchestrator | 2026-02-03 05:03:48.446741 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-02-03 05:03:49.648916 | orchestrator | changed: [testbed-manager] 2026-02-03 05:03:49.649024 | orchestrator | 2026-02-03 05:03:49.649040 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-02-03 05:03:50.275249 | orchestrator | ok: [testbed-manager] 2026-02-03 05:03:50.275353 | orchestrator | 2026-02-03 05:03:50.275371 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-02-03 05:03:50.818166 | orchestrator | ok: [testbed-manager] 2026-02-03 05:03:50.818268 | orchestrator | 2026-02-03 05:03:50.818310 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-02-03 05:03:50.883215 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:03:50.883309 | orchestrator | 2026-02-03 05:03:50.883323 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-02-03 05:03:50.959474 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-02-03 05:03:50.959546 | orchestrator | 2026-02-03 05:03:50.959553 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-02-03 05:03:51.027813 | orchestrator | ok: [testbed-manager] 2026-02-03 05:03:51.028001 | orchestrator | 2026-02-03 05:03:51.028027 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-02-03 05:03:54.022935 | orchestrator | ok: [testbed-manager] => (item=osism) 2026-02-03 05:03:54.023027 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker) 2026-02-03 05:03:54.023040 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager) 
2026-02-03 05:03:54.023050 | orchestrator | 2026-02-03 05:03:54.023061 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-02-03 05:03:55.060921 | orchestrator | ok: [testbed-manager] 2026-02-03 05:03:55.061018 | orchestrator | 2026-02-03 05:03:55.061032 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-02-03 05:03:56.207816 | orchestrator | ok: [testbed-manager] 2026-02-03 05:03:56.208052 | orchestrator | 2026-02-03 05:03:56.208081 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-02-03 05:03:57.279659 | orchestrator | ok: [testbed-manager] 2026-02-03 05:03:57.279760 | orchestrator | 2026-02-03 05:03:57.279777 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-02-03 05:03:57.351995 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-02-03 05:03:57.352093 | orchestrator | 2026-02-03 05:03:57.352108 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-02-03 05:03:57.428433 | orchestrator | ok: [testbed-manager] 2026-02-03 05:03:57.428532 | orchestrator | 2026-02-03 05:03:57.428547 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-02-03 05:03:58.578611 | orchestrator | ok: [testbed-manager] => (item=osism-include) 2026-02-03 05:03:58.578693 | orchestrator | 2026-02-03 05:03:58.578704 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-02-03 05:03:58.675984 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-02-03 05:03:58.676063 | orchestrator | 2026-02-03 05:03:58.676074 | orchestrator | TASK 
[osism.services.manager : Copy manager systemd unit file] ***************** 2026-02-03 05:03:59.822974 | orchestrator | ok: [testbed-manager] 2026-02-03 05:03:59.823072 | orchestrator | 2026-02-03 05:03:59.823082 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-02-03 05:04:01.078795 | orchestrator | ok: [testbed-manager] 2026-02-03 05:04:01.078960 | orchestrator | 2026-02-03 05:04:01.078979 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-02-03 05:04:01.149183 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:04:01.149313 | orchestrator | 2026-02-03 05:04:01.149338 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-02-03 05:04:01.215630 | orchestrator | ok: [testbed-manager] 2026-02-03 05:04:01.215697 | orchestrator | 2026-02-03 05:04:01.215706 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-02-03 05:04:02.674013 | orchestrator | changed: [testbed-manager] 2026-02-03 05:04:02.674221 | orchestrator | 2026-02-03 05:04:02.674243 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-02-03 05:05:12.759317 | orchestrator | changed: [testbed-manager] 2026-02-03 05:05:12.759410 | orchestrator | 2026-02-03 05:05:12.759423 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-02-03 05:05:14.078576 | orchestrator | ok: [testbed-manager] 2026-02-03 05:05:14.078679 | orchestrator | 2026-02-03 05:05:14.078697 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-02-03 05:05:14.150297 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:05:14.150390 | orchestrator | 2026-02-03 05:05:14.150404 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-02-03 
05:05:15.117265 | orchestrator | ok: [testbed-manager] 2026-02-03 05:05:15.117379 | orchestrator | 2026-02-03 05:05:15.117398 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-02-03 05:05:15.183770 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:05:15.183886 | orchestrator | 2026-02-03 05:05:15.183903 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-03 05:05:15.183954 | orchestrator | 2026-02-03 05:05:15.183967 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-02-03 05:05:30.365458 | orchestrator | changed: [testbed-manager] 2026-02-03 05:05:30.365558 | orchestrator | 2026-02-03 05:05:30.365574 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-02-03 05:06:30.425045 | orchestrator | Pausing for 60 seconds 2026-02-03 05:06:30.425171 | orchestrator | changed: [testbed-manager] 2026-02-03 05:06:30.425196 | orchestrator | 2026-02-03 05:06:30.425217 | orchestrator | RUNNING HANDLER [osism.services.manager : Register that manager service was restarted] *** 2026-02-03 05:06:30.478138 | orchestrator | ok: [testbed-manager] 2026-02-03 05:06:30.478223 | orchestrator | 2026-02-03 05:06:30.478235 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-02-03 05:06:34.731710 | orchestrator | changed: [testbed-manager] 2026-02-03 05:06:34.731814 | orchestrator | 2026-02-03 05:06:34.731830 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-02-03 05:07:37.654524 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-02-03 05:07:37.654755 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2026-02-03 05:07:37.654782 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-02-03 05:07:37.654806 | orchestrator | changed: [testbed-manager] 2026-02-03 05:07:37.654828 | orchestrator | 2026-02-03 05:07:37.654849 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-02-03 05:07:50.922440 | orchestrator | changed: [testbed-manager] 2026-02-03 05:07:50.922587 | orchestrator | 2026-02-03 05:07:50.922605 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-02-03 05:07:51.010471 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-02-03 05:07:51.010667 | orchestrator | 2026-02-03 05:07:51.010684 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-03 05:07:51.010697 | orchestrator | 2026-02-03 05:07:51.010708 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-02-03 05:07:51.066316 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:07:51.066386 | orchestrator | 2026-02-03 05:07:51.066392 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-02-03 05:07:51.150379 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-02-03 05:07:51.150472 | orchestrator | 2026-02-03 05:07:51.150551 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-02-03 05:07:52.476114 | orchestrator | changed: [testbed-manager] 2026-02-03 05:07:52.476193 | orchestrator | 2026-02-03 05:07:52.476202 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-02-03 05:07:56.551657 
| orchestrator | ok: [testbed-manager] 2026-02-03 05:07:56.551749 | orchestrator | 2026-02-03 05:07:56.551764 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-02-03 05:07:56.652488 | orchestrator | ok: [testbed-manager] => { 2026-02-03 05:07:56.652580 | orchestrator | "version_check_result.stdout_lines": [ 2026-02-03 05:07:56.652594 | orchestrator | "=== OSISM Container Version Check ===", 2026-02-03 05:07:56.652606 | orchestrator | "Checking running containers against expected versions...", 2026-02-03 05:07:56.652619 | orchestrator | "", 2026-02-03 05:07:56.652630 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-02-03 05:07:56.652642 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251208.0", 2026-02-03 05:07:56.652653 | orchestrator | " Enabled: true", 2026-02-03 05:07:56.652664 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251208.0", 2026-02-03 05:07:56.652675 | orchestrator | " Status: ✅ MATCH", 2026-02-03 05:07:56.652686 | orchestrator | "", 2026-02-03 05:07:56.652698 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-02-03 05:07:56.652709 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251208.0", 2026-02-03 05:07:56.652720 | orchestrator | " Enabled: true", 2026-02-03 05:07:56.652731 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251208.0", 2026-02-03 05:07:56.652742 | orchestrator | " Status: ✅ MATCH", 2026-02-03 05:07:56.652753 | orchestrator | "", 2026-02-03 05:07:56.652764 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-02-03 05:07:56.652775 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251208.0", 2026-02-03 05:07:56.652785 | orchestrator | " Enabled: true", 2026-02-03 05:07:56.652796 | orchestrator | " Running: 
registry.osism.tech/osism/osism-kubernetes:0.20251208.0", 2026-02-03 05:07:56.652807 | orchestrator | " Status: ✅ MATCH", 2026-02-03 05:07:56.652818 | orchestrator | "", 2026-02-03 05:07:56.652829 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-02-03 05:07:56.652840 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251208.0", 2026-02-03 05:07:56.652851 | orchestrator | " Enabled: true", 2026-02-03 05:07:56.652862 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251208.0", 2026-02-03 05:07:56.652872 | orchestrator | " Status: ✅ MATCH", 2026-02-03 05:07:56.652883 | orchestrator | "", 2026-02-03 05:07:56.652895 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-02-03 05:07:56.652906 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251208.0", 2026-02-03 05:07:56.652916 | orchestrator | " Enabled: true", 2026-02-03 05:07:56.652927 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251208.0", 2026-02-03 05:07:56.652938 | orchestrator | " Status: ✅ MATCH", 2026-02-03 05:07:56.652949 | orchestrator | "", 2026-02-03 05:07:56.652960 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-02-03 05:07:56.652994 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-03 05:07:56.653005 | orchestrator | " Enabled: true", 2026-02-03 05:07:56.653020 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-03 05:07:56.653032 | orchestrator | " Status: ✅ MATCH", 2026-02-03 05:07:56.653045 | orchestrator | "", 2026-02-03 05:07:56.653057 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-02-03 05:07:56.653071 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-03 05:07:56.653084 | orchestrator | " Enabled: true", 2026-02-03 05:07:56.653097 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-03 
05:07:56.653108 | orchestrator | " Status: ✅ MATCH", 2026-02-03 05:07:56.653119 | orchestrator | "", 2026-02-03 05:07:56.653130 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-02-03 05:07:56.653140 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-03 05:07:56.653151 | orchestrator | " Enabled: true", 2026-02-03 05:07:56.653183 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-03 05:07:56.653195 | orchestrator | " Status: ✅ MATCH", 2026-02-03 05:07:56.653206 | orchestrator | "", 2026-02-03 05:07:56.653217 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-03 05:07:56.653227 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251208.0", 2026-02-03 05:07:56.653238 | orchestrator | " Enabled: true", 2026-02-03 05:07:56.653249 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251208.0", 2026-02-03 05:07:56.653260 | orchestrator | " Status: ✅ MATCH", 2026-02-03 05:07:56.653271 | orchestrator | "", 2026-02-03 05:07:56.653286 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-03 05:07:56.653297 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-03 05:07:56.653309 | orchestrator | " Enabled: true", 2026-02-03 05:07:56.653320 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-03 05:07:56.653330 | orchestrator | " Status: ✅ MATCH", 2026-02-03 05:07:56.653341 | orchestrator | "", 2026-02-03 05:07:56.653352 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-03 05:07:56.653363 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-03 05:07:56.653374 | orchestrator | " Enabled: true", 2026-02-03 05:07:56.653385 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-03 05:07:56.653396 | orchestrator | " Status: ✅ MATCH", 2026-02-03 
05:07:56.653406 | orchestrator | "", 2026-02-03 05:07:56.653417 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-03 05:07:56.653428 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-03 05:07:56.653474 | orchestrator | " Enabled: true", 2026-02-03 05:07:56.653486 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-03 05:07:56.653497 | orchestrator | " Status: ✅ MATCH", 2026-02-03 05:07:56.653507 | orchestrator | "", 2026-02-03 05:07:56.653518 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-03 05:07:56.653529 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-03 05:07:56.653540 | orchestrator | " Enabled: true", 2026-02-03 05:07:56.653551 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-03 05:07:56.653562 | orchestrator | " Status: ✅ MATCH", 2026-02-03 05:07:56.653572 | orchestrator | "", 2026-02-03 05:07:56.653583 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-03 05:07:56.653594 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-03 05:07:56.653605 | orchestrator | " Enabled: true", 2026-02-03 05:07:56.653616 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-03 05:07:56.653645 | orchestrator | " Status: ✅ MATCH", 2026-02-03 05:07:56.653657 | orchestrator | "", 2026-02-03 05:07:56.653668 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-03 05:07:56.653679 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-03 05:07:56.653698 | orchestrator | " Enabled: true", 2026-02-03 05:07:56.653709 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-03 05:07:56.653720 | orchestrator | " Status: ✅ MATCH", 2026-02-03 05:07:56.653731 | orchestrator | "", 2026-02-03 05:07:56.653741 | orchestrator | "=== Summary 
===", 2026-02-03 05:07:56.653752 | orchestrator | "Errors (version mismatches): 0", 2026-02-03 05:07:56.653763 | orchestrator | "Warnings (expected containers not running): 0", 2026-02-03 05:07:56.653774 | orchestrator | "", 2026-02-03 05:07:56.653785 | orchestrator | "✅ All running containers match expected versions!" 2026-02-03 05:07:56.653796 | orchestrator | ] 2026-02-03 05:07:56.653808 | orchestrator | } 2026-02-03 05:07:56.653819 | orchestrator | 2026-02-03 05:07:56.653830 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-03 05:07:56.719891 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:07:56.719988 | orchestrator | 2026-02-03 05:07:56.719998 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 05:07:56.720006 | orchestrator | testbed-manager : ok=51 changed=9 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 2026-02-03 05:07:56.720015 | orchestrator | 2026-02-03 05:08:09.545265 | orchestrator | 2026-02-03 05:08:09 | INFO  | Task a57824d4-c5b4-4109-8742-30bce3d4a8de (sync inventory) is running in background. Output coming soon. 
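The version check output above compares, per service, the image reference the configuration expects against the image the container actually runs. A minimal sketch of that per-service comparison (function name and the inline arguments are assumptions; the job itself reads the running image via `docker inspect` on the live container):

```shell
#!/usr/bin/env bash
# Sketch of one step of the container version check seen above:
# compare an expected image reference with the reported running one.
check_service_version() {
    local service="$1" expected="$2" running="$3"
    echo "Checking service: $service"
    echo "  Expected: $expected"
    echo "  Running:  $running"
    if [[ "$running" == "$expected" ]]; then
        echo "  Status: MATCH"
        return 0
    fi
    # A mismatch is counted as an error in the summary.
    echo "  Status: MISMATCH"
    return 1
}
```

In the real script the `running` value would come from something like `docker inspect -f '{{.Config.Image}}' <container>`, and mismatches feed the "Errors (version mismatches)" counter in the summary.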
2026-02-03 05:08:41.279280 | orchestrator | 2026-02-03 05:08:11 | INFO  | Starting group_vars file reorganization 2026-02-03 05:08:41.279415 | orchestrator | 2026-02-03 05:08:11 | INFO  | Moved 0 file(s) to their respective directories 2026-02-03 05:08:41.279432 | orchestrator | 2026-02-03 05:08:11 | INFO  | Group_vars file reorganization completed 2026-02-03 05:08:41.279466 | orchestrator | 2026-02-03 05:08:14 | INFO  | Starting variable preparation from inventory 2026-02-03 05:08:41.279479 | orchestrator | 2026-02-03 05:08:17 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-02-03 05:08:41.279491 | orchestrator | 2026-02-03 05:08:17 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-02-03 05:08:41.279502 | orchestrator | 2026-02-03 05:08:17 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-02-03 05:08:41.279514 | orchestrator | 2026-02-03 05:08:17 | INFO  | 3 file(s) written, 6 host(s) processed 2026-02-03 05:08:41.279562 | orchestrator | 2026-02-03 05:08:17 | INFO  | Variable preparation completed 2026-02-03 05:08:41.279575 | orchestrator | 2026-02-03 05:08:19 | INFO  | Starting inventory overwrite handling 2026-02-03 05:08:41.279586 | orchestrator | 2026-02-03 05:08:19 | INFO  | Handling group overwrites in 99-overwrite 2026-02-03 05:08:41.279598 | orchestrator | 2026-02-03 05:08:19 | INFO  | Removing group frr:children from 60-generic 2026-02-03 05:08:41.279609 | orchestrator | 2026-02-03 05:08:19 | INFO  | Removing group netbird:children from 50-infrastructure 2026-02-03 05:08:41.279620 | orchestrator | 2026-02-03 05:08:19 | INFO  | Removing group ceph-mds from 50-ceph 2026-02-03 05:08:41.279631 | orchestrator | 2026-02-03 05:08:19 | INFO  | Removing group ceph-rgw from 50-ceph 2026-02-03 05:08:41.279645 | orchestrator | 2026-02-03 05:08:19 | INFO  | Handling group overwrites in 20-roles 2026-02-03 05:08:41.279664 | orchestrator | 2026-02-03 05:08:19 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-02-03 05:08:41.279684 | orchestrator | 2026-02-03 05:08:19 | INFO  | Removed 5 group(s) in total 2026-02-03 05:08:41.279702 | orchestrator | 2026-02-03 05:08:19 | INFO  | Inventory overwrite handling completed 2026-02-03 05:08:41.279718 | orchestrator | 2026-02-03 05:08:21 | INFO  | Starting merge of inventory files 2026-02-03 05:08:41.279736 | orchestrator | 2026-02-03 05:08:21 | INFO  | Inventory files merged successfully 2026-02-03 05:08:41.279781 | orchestrator | 2026-02-03 05:08:26 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-02-03 05:08:41.279800 | orchestrator | 2026-02-03 05:08:39 | INFO  | Successfully wrote ClusterShell configuration 2026-02-03 05:08:41.704940 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-03 05:08:41.705041 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-03 05:08:41.705056 | orchestrator | + local max_attempts=60 2026-02-03 05:08:41.705071 | orchestrator | + local name=kolla-ansible 2026-02-03 05:08:41.705090 | orchestrator | + local attempt_num=1 2026-02-03 05:08:41.705585 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-03 05:08:41.749045 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-03 05:08:41.749184 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-03 05:08:41.749205 | orchestrator | + local max_attempts=60 2026-02-03 05:08:41.749224 | orchestrator | + local name=osism-ansible 2026-02-03 05:08:41.749241 | orchestrator | + local attempt_num=1 2026-02-03 05:08:41.749443 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-03 05:08:41.789855 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-03 05:08:41.789970 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-02-03 05:08:42.009834 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-03 05:08:42.009936 | 
orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251208.0 "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-03 05:08:42.009952 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251208.0 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-03 05:08:42.009964 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-02-03 05:08:42.009981 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 hours ago Up 2 minutes (healthy) 8000/tcp 2026-02-03 05:08:42.009992 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" beat 3 minutes ago Up 3 minutes (healthy) 2026-02-03 05:08:42.010090 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" flower 3 minutes ago Up 3 minutes (healthy) 2026-02-03 05:08:42.010114 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251208.0 "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up 2 minutes (healthy) 2026-02-03 05:08:42.010183 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" listener 3 minutes ago Restarting (0) 11 seconds ago 2026-02-03 05:08:42.010202 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 hours ago Up 3 minutes (healthy) 3306/tcp 2026-02-03 05:08:42.010218 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" openstack 3 minutes ago Up 3 minutes (healthy) 2026-02-03 05:08:42.010230 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 hours ago Up 3 
minutes (healthy) 6379/tcp 2026-02-03 05:08:42.010241 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251208.0 "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-03 05:08:42.010278 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251208.0 "docker-entrypoint.s…" frontend 3 minutes ago Up 3 minutes 192.168.16.5:3000->3000/tcp 2026-02-03 05:08:42.010291 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251208.0 "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy) 2026-02-03 05:08:42.010302 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- sleep…" osismclient 3 minutes ago Up 3 minutes (healthy) 2026-02-03 05:08:42.023361 | orchestrator | + [[ '' == \t\r\u\e ]] 2026-02-03 05:08:42.023429 | orchestrator | + [[ '' == \f\a\l\s\e ]] 2026-02-03 05:08:42.023439 | orchestrator | + osism apply facts 2026-02-03 05:08:54.336269 | orchestrator | 2026-02-03 05:08:54 | INFO  | Task bbb9759d-e642-4883-9c0c-2bf567990950 (facts) was prepared for execution. 2026-02-03 05:08:54.336381 | orchestrator | 2026-02-03 05:08:54 | INFO  | It takes a moment until task bbb9759d-e642-4883-9c0c-2bf567990950 (facts) has been started and output is visible here. 
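The `wait_for_container_healthy` trace above (max attempts, container name, `docker inspect` on `.State.Health.Status`) can be sketched as a simple polling loop. This is a reconstruction from the trace, not the script's actual source; the `HEALTH_CMD` override is an added assumption so the loop can be exercised without Docker:

```shell
#!/usr/bin/env bash
# Reconstruction of the health-wait loop traced above. The job invokes
# `docker inspect -f '{{.State.Health.Status}}' <name>` until it reports
# "healthy" or the attempt budget is exhausted.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    local status
    while true; do
        # HEALTH_CMD is a test seam (assumption); default matches the trace.
        status=$(${HEALTH_CMD:-docker inspect -f '{{.State.Health.Status}}'} "$name")
        if [[ "$status" == "healthy" ]]; then
            return 0
        fi
        if (( attempt_num >= max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

In the log both `kolla-ansible` and `osism-ansible` report `healthy` on the first probe, so the loop exits immediately.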
2026-02-03 05:09:14.635394 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-03 05:09:14.635505 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-03 05:09:14.635534 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-03 05:09:14.635545 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-03 05:09:14.635568 | orchestrator | 2026-02-03 05:09:14.635579 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-03 05:09:14.635591 | orchestrator | 2026-02-03 05:09:14.635602 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-03 05:09:14.635613 | orchestrator | Tuesday 03 February 2026 05:09:01 +0000 (0:00:02.207) 0:00:02.207 ****** 2026-02-03 05:09:14.635625 | orchestrator | ok: [testbed-manager] 2026-02-03 05:09:14.635637 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:09:14.635648 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:09:14.635659 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:09:14.635670 | orchestrator | ok: [testbed-node-3] 2026-02-03 05:09:14.635681 | orchestrator | ok: [testbed-node-4] 2026-02-03 05:09:14.635691 | orchestrator | ok: [testbed-node-5] 2026-02-03 05:09:14.635702 | orchestrator | 2026-02-03 05:09:14.635713 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-03 05:09:14.635725 | orchestrator | Tuesday 03 February 2026 05:09:03 +0000 (0:00:02.349) 0:00:04.557 ****** 2026-02-03 05:09:14.635736 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:09:14.635747 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:09:14.635777 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:09:14.635789 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:09:14.635804 | orchestrator | skipping: [testbed-node-3] 2026-02-03 
05:09:14.635815 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:09:14.635826 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:09:14.635837 | orchestrator | 2026-02-03 05:09:14.635848 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-03 05:09:14.635859 | orchestrator | 2026-02-03 05:09:14.635870 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-03 05:09:14.635881 | orchestrator | Tuesday 03 February 2026 05:09:05 +0000 (0:00:01.978) 0:00:06.535 ****** 2026-02-03 05:09:14.635893 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:09:14.635903 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:09:14.635970 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:09:14.635984 | orchestrator | ok: [testbed-manager] 2026-02-03 05:09:14.636021 | orchestrator | ok: [testbed-node-3] 2026-02-03 05:09:14.636033 | orchestrator | ok: [testbed-node-5] 2026-02-03 05:09:14.636043 | orchestrator | ok: [testbed-node-4] 2026-02-03 05:09:14.636054 | orchestrator | 2026-02-03 05:09:14.636065 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-03 05:09:14.636076 | orchestrator | 2026-02-03 05:09:14.636087 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-03 05:09:14.636098 | orchestrator | Tuesday 03 February 2026 05:09:12 +0000 (0:00:06.523) 0:00:13.058 ****** 2026-02-03 05:09:14.636109 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:09:14.636120 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:09:14.636130 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:09:14.636141 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:09:14.636152 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:09:14.636162 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:09:14.636173 | orchestrator | skipping: [testbed-node-5] 
2026-02-03 05:09:14.636184 | orchestrator | 2026-02-03 05:09:14.636195 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 05:09:14.636206 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 05:09:14.636219 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 05:09:14.636230 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 05:09:14.636240 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 05:09:14.636251 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 05:09:14.636262 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 05:09:14.636273 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 05:09:14.636285 | orchestrator | 2026-02-03 05:09:14.636296 | orchestrator | 2026-02-03 05:09:14.636307 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 05:09:14.636318 | orchestrator | Tuesday 03 February 2026 05:09:14 +0000 (0:00:01.830) 0:00:14.889 ****** 2026-02-03 05:09:14.636329 | orchestrator | =============================================================================== 2026-02-03 05:09:14.636339 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.52s 2026-02-03 05:09:14.636350 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 2.35s 2026-02-03 05:09:14.636361 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.98s 2026-02-03 05:09:14.636372 | orchestrator | Gather facts for all hosts 
---------------------------------------------- 1.83s 2026-02-03 05:09:15.054399 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-02-03 05:09:15.151744 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-03 05:09:15.151957 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-02-03 05:09:15.187659 | orchestrator | + OPENSTACK_VERSION=2025.1 2026-02-03 05:09:15.187761 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2025.1 2026-02-03 05:09:15.194123 | orchestrator | + set -e 2026-02-03 05:09:15.194493 | orchestrator | + NAMESPACE=kolla/release/2025.1 2026-02-03 05:09:15.194519 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2025.1#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-03 05:09:15.199201 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh 2026-02-03 05:09:15.206149 | orchestrator | 2026-02-03 05:09:15.206204 | orchestrator | # UPGRADE SERVICES 2026-02-03 05:09:15.206244 | orchestrator | 2026-02-03 05:09:15.206284 | orchestrator | + set -e 2026-02-03 05:09:15.206298 | orchestrator | + echo 2026-02-03 05:09:15.206309 | orchestrator | + echo '# UPGRADE SERVICES' 2026-02-03 05:09:15.206321 | orchestrator | + echo 2026-02-03 05:09:15.206332 | orchestrator | + source /opt/manager-vars.sh 2026-02-03 05:09:15.207172 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-03 05:09:15.207196 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-03 05:09:15.207207 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-03 05:09:15.207217 | orchestrator | ++ CEPH_VERSION=reef 2026-02-03 05:09:15.207357 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-03 05:09:15.207371 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-03 05:09:15.207382 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-03 05:09:15.207393 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-03 05:09:15.207405 | orchestrator | ++ export 
OPENSTACK_VERSION=2024.2 2026-02-03 05:09:15.207416 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-03 05:09:15.207427 | orchestrator | ++ export ARA=false 2026-02-03 05:09:15.207438 | orchestrator | ++ ARA=false 2026-02-03 05:09:15.207449 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-03 05:09:15.207465 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-03 05:09:15.207483 | orchestrator | ++ export TEMPEST=false 2026-02-03 05:09:15.207502 | orchestrator | ++ TEMPEST=false 2026-02-03 05:09:15.207518 | orchestrator | ++ export IS_ZUUL=true 2026-02-03 05:09:15.207536 | orchestrator | ++ IS_ZUUL=true 2026-02-03 05:09:15.207553 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-02-03 05:09:15.207573 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-02-03 05:09:15.207589 | orchestrator | ++ export EXTERNAL_API=false 2026-02-03 05:09:15.207604 | orchestrator | ++ EXTERNAL_API=false 2026-02-03 05:09:15.207622 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-03 05:09:15.207639 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-03 05:09:15.207655 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-03 05:09:15.207673 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-03 05:09:15.207693 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-03 05:09:15.207712 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-03 05:09:15.207731 | orchestrator | ++ export RABBITMQ3TO4=true 2026-02-03 05:09:15.207751 | orchestrator | ++ RABBITMQ3TO4=true 2026-02-03 05:09:15.207792 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false 2026-02-03 05:09:15.207805 | orchestrator | + SKIP_CEPH_UPGRADE=false 2026-02-03 05:09:15.207816 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-02-03 05:09:15.218500 | orchestrator | + set -e 2026-02-03 05:09:15.218563 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-03 05:09:15.219209 | orchestrator | ++ export INTERACTIVE=false 2026-02-03 05:09:15.219230 | 
orchestrator | ++ INTERACTIVE=false 2026-02-03 05:09:15.219240 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-03 05:09:15.219252 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-03 05:09:15.219468 | orchestrator | 2026-02-03 05:09:15.219484 | orchestrator | # PULL IMAGES 2026-02-03 05:09:15.219493 | orchestrator | 2026-02-03 05:09:15.219519 | orchestrator | + source /opt/manager-vars.sh 2026-02-03 05:09:15.219529 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-03 05:09:15.219539 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-03 05:09:15.219548 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-03 05:09:15.219557 | orchestrator | ++ CEPH_VERSION=reef 2026-02-03 05:09:15.219566 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-03 05:09:15.219575 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-03 05:09:15.219584 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-03 05:09:15.219594 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-03 05:09:15.219603 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-03 05:09:15.219612 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-03 05:09:15.219621 | orchestrator | ++ export ARA=false 2026-02-03 05:09:15.219630 | orchestrator | ++ ARA=false 2026-02-03 05:09:15.219639 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-03 05:09:15.219648 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-03 05:09:15.219657 | orchestrator | ++ export TEMPEST=false 2026-02-03 05:09:15.219666 | orchestrator | ++ TEMPEST=false 2026-02-03 05:09:15.219675 | orchestrator | ++ export IS_ZUUL=true 2026-02-03 05:09:15.219684 | orchestrator | ++ IS_ZUUL=true 2026-02-03 05:09:15.219693 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-02-03 05:09:15.219702 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-02-03 05:09:15.219711 | orchestrator | ++ export EXTERNAL_API=false 2026-02-03 05:09:15.219720 | orchestrator | ++ EXTERNAL_API=false 2026-02-03 05:09:15.219729 | 
orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-03 05:09:15.219737 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-03 05:09:15.219746 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-03 05:09:15.219755 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-03 05:09:15.219784 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-03 05:09:15.219793 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-03 05:09:15.219802 | orchestrator | ++ export RABBITMQ3TO4=true 2026-02-03 05:09:15.219811 | orchestrator | ++ RABBITMQ3TO4=true 2026-02-03 05:09:15.219820 | orchestrator | + echo 2026-02-03 05:09:15.219829 | orchestrator | + echo '# PULL IMAGES' 2026-02-03 05:09:15.219838 | orchestrator | + echo 2026-02-03 05:09:15.220273 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-03 05:09:15.285849 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-03 05:09:15.285976 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-02-03 05:09:17.516053 | orchestrator | 2026-02-03 05:09:17 | INFO  | Trying to run play pull-images in environment custom 2026-02-03 05:09:27.610963 | orchestrator | 2026-02-03 05:09:27 | INFO  | Task d5000efe-fc57-4eac-bc04-c08313dfe6e5 (pull-images) was prepared for execution. 2026-02-03 05:09:27.611043 | orchestrator | 2026-02-03 05:09:27 | INFO  | Task d5000efe-fc57-4eac-bc04-c08313dfe6e5 is running in background. No more output. Check ARA for logs. 
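The `semver A B` calls traced above (e.g. `semver 9.5.0 7.0.0` followed by `[[ 1 -ge 0 ]]`) gate upgrade steps on a version comparison. A rough stand-in for that gate using `sort -V` — note this handles only plain `X.Y.Z` versions; the job's real `semver` helper also orders pre-release tags such as `10.0.0-rc.1` (which is why it compares against `10.0.0-0`), something `sort -V` does not do per the SemVer spec:

```shell
#!/usr/bin/env bash
# Sketch of an "is version A >= version B?" gate, assuming plain X.Y.Z
# versions. True when $1 sorts as the highest (or equal) of the pair.
version_ge() {
    [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}
```

With such a helper, a gated step would read `if version_ge "$MANAGER_VERSION" "8.0.3"; then osism apply frr; fi`, mirroring the `semver ... && [[ 1 -ge 0 ]]` pattern in the trace.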
2026-02-03 05:09:27.985150 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh 2026-02-03 05:09:27.995565 | orchestrator | + set -e 2026-02-03 05:09:27.995965 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-03 05:09:27.995997 | orchestrator | ++ export INTERACTIVE=false 2026-02-03 05:09:27.996009 | orchestrator | ++ INTERACTIVE=false 2026-02-03 05:09:27.996019 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-03 05:09:27.996030 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-03 05:09:27.996041 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-03 05:09:27.998111 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-02-03 05:09:28.013739 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1 2026-02-03 05:09:28.013803 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1 2026-02-03 05:09:28.013817 | orchestrator | ++ semver 10.0.0-rc.1 8.0.3 2026-02-03 05:09:28.081706 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-03 05:09:28.081814 | orchestrator | + osism apply frr 2026-02-03 05:09:40.500479 | orchestrator | 2026-02-03 05:09:40 | INFO  | Task 2a5d6d93-5b6d-4568-bd73-f69af11e3856 (frr) was prepared for execution. 2026-02-03 05:09:40.500589 | orchestrator | 2026-02-03 05:09:40 | INFO  | It takes a moment until task 2a5d6d93-5b6d-4568-bd73-f69af11e3856 (frr) has been started and output is visible here. 
2026-02-03 05:10:16.167991 | orchestrator |
2026-02-03 05:10:16.168144 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-02-03 05:10:16.168162 | orchestrator |
2026-02-03 05:10:16.168175 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-02-03 05:10:16.168187 | orchestrator | Tuesday 03 February 2026 05:09:48 +0000 (0:00:02.957) 0:00:02.957 ******
2026-02-03 05:10:16.168199 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-02-03 05:10:16.168212 | orchestrator |
2026-02-03 05:10:16.168223 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-02-03 05:10:16.168234 | orchestrator | Tuesday 03 February 2026 05:09:52 +0000 (0:00:03.916) 0:00:06.873 ******
2026-02-03 05:10:16.168245 | orchestrator | ok: [testbed-manager]
2026-02-03 05:10:16.168258 | orchestrator |
2026-02-03 05:10:16.168269 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-02-03 05:10:16.168280 | orchestrator | Tuesday 03 February 2026 05:09:54 +0000 (0:00:02.445) 0:00:09.319 ******
2026-02-03 05:10:16.168292 | orchestrator | ok: [testbed-manager]
2026-02-03 05:10:16.168303 | orchestrator |
2026-02-03 05:10:16.168314 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-02-03 05:10:16.168325 | orchestrator | Tuesday 03 February 2026 05:09:57 +0000 (0:00:03.315) 0:00:12.635 ******
2026-02-03 05:10:16.168336 | orchestrator | ok: [testbed-manager]
2026-02-03 05:10:16.168347 | orchestrator |
2026-02-03 05:10:16.168358 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-02-03 05:10:16.168368 | orchestrator | Tuesday 03 February 2026 05:10:00 +0000 (0:00:02.102) 0:00:14.737 ******
2026-02-03 05:10:16.168410 | orchestrator | ok: [testbed-manager]
2026-02-03 05:10:16.168422 | orchestrator |
2026-02-03 05:10:16.168433 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-02-03 05:10:16.168444 | orchestrator | Tuesday 03 February 2026 05:10:02 +0000 (0:00:02.002) 0:00:16.740 ******
2026-02-03 05:10:16.168455 | orchestrator | ok: [testbed-manager]
2026-02-03 05:10:16.168466 | orchestrator |
2026-02-03 05:10:16.168477 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-02-03 05:10:16.168489 | orchestrator | Tuesday 03 February 2026 05:10:04 +0000 (0:00:02.672) 0:00:19.413 ******
2026-02-03 05:10:16.168499 | orchestrator | skipping: [testbed-manager]
2026-02-03 05:10:16.168514 | orchestrator |
2026-02-03 05:10:16.168527 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-02-03 05:10:16.168540 | orchestrator | Tuesday 03 February 2026 05:10:05 +0000 (0:00:01.171) 0:00:20.585 ******
2026-02-03 05:10:16.168582 | orchestrator | skipping: [testbed-manager]
2026-02-03 05:10:16.168595 | orchestrator |
2026-02-03 05:10:16.168608 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-02-03 05:10:16.168620 | orchestrator | Tuesday 03 February 2026 05:10:07 +0000 (0:00:01.231) 0:00:21.816 ******
2026-02-03 05:10:16.168633 | orchestrator | ok: [testbed-manager]
2026-02-03 05:10:16.168645 | orchestrator |
2026-02-03 05:10:16.168658 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-02-03 05:10:16.168670 | orchestrator | Tuesday 03 February 2026 05:10:09 +0000 (0:00:02.048) 0:00:23.865 ******
2026-02-03 05:10:16.168682 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-02-03 05:10:16.168719 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-02-03 05:10:16.168734 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-02-03 05:10:16.168748 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-02-03 05:10:16.168761 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-02-03 05:10:16.168773 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-02-03 05:10:16.168787 | orchestrator |
2026-02-03 05:10:16.168799 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-02-03 05:10:16.168813 | orchestrator | Tuesday 03 February 2026 05:10:13 +0000 (0:00:03.819) 0:00:27.684 ******
2026-02-03 05:10:16.168826 | orchestrator | ok: [testbed-manager]
2026-02-03 05:10:16.168838 | orchestrator |
2026-02-03 05:10:16.168851 | orchestrator | PLAY RECAP *********************************************************************
2026-02-03 05:10:16.168864 | orchestrator | testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-03 05:10:16.168876 | orchestrator |
2026-02-03 05:10:16.168888 | orchestrator |
2026-02-03 05:10:16.168899 | orchestrator | TASKS RECAP ********************************************************************
2026-02-03 05:10:16.168910 | orchestrator | Tuesday 03 February 2026 05:10:15 +0000 (0:00:02.728) 0:00:30.413 ******
2026-02-03 05:10:16.168920 | orchestrator | ===============================================================================
2026-02-03 05:10:16.168931 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 3.92s
2026-02-03 05:10:16.168942 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.82s
2026-02-03 05:10:16.168952 | orchestrator | osism.services.frr : Install frr package -------------------------------- 3.32s
2026-02-03 05:10:16.168963 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.73s
2026-02-03 05:10:16.168973 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 2.67s
2026-02-03 05:10:16.168984 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.45s
2026-02-03 05:10:16.168995 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 2.10s
2026-02-03 05:10:16.169014 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 2.05s
2026-02-03 05:10:16.169046 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 2.00s
2026-02-03 05:10:16.169057 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 1.23s
2026-02-03 05:10:16.169068 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 1.17s
2026-02-03 05:10:16.565308 | orchestrator | + osism apply kubernetes
2026-02-03 05:10:18.929440 | orchestrator | 2026-02-03 05:10:18 | INFO  | Task 3df508c8-df23-43a9-ad46-aaa3c7e82e0a (kubernetes) was prepared for execution.
2026-02-03 05:10:18.929599 | orchestrator | 2026-02-03 05:10:18 | INFO  | It takes a moment until task 3df508c8-df23-43a9-ad46-aaa3c7e82e0a (kubernetes) has been started and output is visible here.
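The frr role's "Set sysctl parameters" task above applies a set of forwarding and redirect settings on the manager. Collected into a plain sysctl.d-style fragment (values taken from the task's item output; the file path is an illustrative assumption, not necessarily where the role writes them):

```
# Illustrative path: /etc/sysctl.d/90-frr.conf
# Values as applied by the osism.services.frr "Set sysctl parameters" task.
net.ipv4.ip_forward = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.fib_multipath_hash_policy = 1
net.ipv4.conf.default.ignore_routes_with_linkdown = 1
net.ipv4.conf.all.rp_filter = 2
```

Forwarding plus multipath hashing and loose reverse-path filtering (`rp_filter = 2`) are the settings a BGP-routed host typically needs; disabling ICMP redirects keeps the kernel from second-guessing FRR's routes.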
2026-02-03 05:11:05.928471 | orchestrator |
2026-02-03 05:11:05.928716 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-02-03 05:11:05.928741 | orchestrator |
2026-02-03 05:11:05.928753 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-02-03 05:11:05.928767 | orchestrator | Tuesday 03 February 2026 05:10:26 +0000 (0:00:02.284) 0:00:02.284 ******
2026-02-03 05:11:05.928779 | orchestrator | ok: [testbed-node-3]
2026-02-03 05:11:05.928791 | orchestrator | ok: [testbed-node-4]
2026-02-03 05:11:05.928802 | orchestrator | ok: [testbed-node-5]
2026-02-03 05:11:05.928813 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:11:05.928824 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:11:05.928835 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:11:05.928846 | orchestrator |
2026-02-03 05:11:05.928856 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-02-03 05:11:05.928867 | orchestrator | Tuesday 03 February 2026 05:10:31 +0000 (0:00:05.012) 0:00:07.296 ******
2026-02-03 05:11:05.928878 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:11:05.928890 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:11:05.928901 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:11:05.928912 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:11:05.928925 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:11:05.928939 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:11:05.928951 | orchestrator |
2026-02-03 05:11:05.928964 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-02-03 05:11:05.928978 | orchestrator | Tuesday 03 February 2026 05:10:33 +0000 (0:00:01.998) 0:00:09.294 ******
2026-02-03 05:11:05.928991 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:11:05.929005 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:11:05.929019 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:11:05.929032 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:11:05.929044 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:11:05.929056 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:11:05.929069 | orchestrator |
2026-02-03 05:11:05.929082 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-02-03 05:11:05.929095 | orchestrator | Tuesday 03 February 2026 05:10:35 +0000 (0:00:02.119) 0:00:11.414 ******
2026-02-03 05:11:05.929107 | orchestrator | ok: [testbed-node-3]
2026-02-03 05:11:05.929121 | orchestrator | ok: [testbed-node-4]
2026-02-03 05:11:05.929133 | orchestrator | ok: [testbed-node-5]
2026-02-03 05:11:05.929146 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:11:05.929159 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:11:05.929171 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:11:05.929183 | orchestrator |
2026-02-03 05:11:05.929196 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-02-03 05:11:05.929209 | orchestrator | Tuesday 03 February 2026 05:10:37 +0000 (0:00:02.645) 0:00:14.060 ******
2026-02-03 05:11:05.929222 | orchestrator | ok: [testbed-node-3]
2026-02-03 05:11:05.929234 | orchestrator | ok: [testbed-node-4]
2026-02-03 05:11:05.929247 | orchestrator | ok: [testbed-node-5]
2026-02-03 05:11:05.929260 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:11:05.929324 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:11:05.929338 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:11:05.929349 | orchestrator |
2026-02-03 05:11:05.929360 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-02-03 05:11:05.929371 | orchestrator | Tuesday 03 February 2026 05:10:40 +0000 (0:00:02.667) 0:00:16.727 ******
2026-02-03 05:11:05.929382 | orchestrator | ok: [testbed-node-3]
2026-02-03 05:11:05.929393 | orchestrator | ok: [testbed-node-4]
2026-02-03 05:11:05.929404 | orchestrator | ok: [testbed-node-5]
2026-02-03 05:11:05.929415 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:11:05.929426 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:11:05.929437 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:11:05.929448 | orchestrator |
2026-02-03 05:11:05.929459 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-02-03 05:11:05.929469 | orchestrator | Tuesday 03 February 2026 05:10:42 +0000 (0:00:02.296) 0:00:19.023 ******
2026-02-03 05:11:05.929481 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:11:05.929492 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:11:05.929503 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:11:05.929513 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:11:05.929524 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:11:05.929535 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:11:05.929546 | orchestrator |
2026-02-03 05:11:05.929557 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-02-03 05:11:05.929568 | orchestrator | Tuesday 03 February 2026 05:10:45 +0000 (0:00:02.216) 0:00:21.239 ******
2026-02-03 05:11:05.929579 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:11:05.929589 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:11:05.929600 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:11:05.929611 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:11:05.929635 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:11:05.929646 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:11:05.929657 | orchestrator |
2026-02-03 05:11:05.929668 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-02-03 05:11:05.929679 | orchestrator | Tuesday 03 February 2026 05:10:46 +0000 (0:00:01.842) 0:00:23.082 ******
2026-02-03 05:11:05.929690 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-03 05:11:05.929701 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-03 05:11:05.929712 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:11:05.929723 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-03 05:11:05.929734 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-03 05:11:05.929745 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:11:05.929756 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-03 05:11:05.929766 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-03 05:11:05.929777 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:11:05.929788 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-03 05:11:05.929799 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-03 05:11:05.929810 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:11:05.929840 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-03 05:11:05.929851 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-03 05:11:05.929862 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:11:05.929873 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-03 05:11:05.929884 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-03 05:11:05.929895 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:11:05.929906 | orchestrator |
2026-02-03 05:11:05.929933 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-02-03 05:11:05.929944 | orchestrator | Tuesday 03 February 2026 05:10:49 +0000 (0:00:02.255) 0:00:25.337 ******
2026-02-03 05:11:05.929955 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:11:05.929966 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:11:05.929977 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:11:05.929988 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:11:05.929999 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:11:05.930010 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:11:05.930088 | orchestrator |
2026-02-03 05:11:05.930101 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-02-03 05:11:05.930113 | orchestrator | Tuesday 03 February 2026 05:10:51 +0000 (0:00:02.651) 0:00:27.989 ******
2026-02-03 05:11:05.930124 | orchestrator | ok: [testbed-node-3]
2026-02-03 05:11:05.930135 | orchestrator | ok: [testbed-node-4]
2026-02-03 05:11:05.930146 | orchestrator | ok: [testbed-node-5]
2026-02-03 05:11:05.930157 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:11:05.930168 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:11:05.930179 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:11:05.930190 | orchestrator |
2026-02-03 05:11:05.930201 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-02-03 05:11:05.930212 | orchestrator | Tuesday 03 February 2026 05:10:54 +0000 (0:00:02.282) 0:00:30.272 ******
2026-02-03 05:11:05.930222 | orchestrator | ok: [testbed-node-3]
2026-02-03 05:11:05.930233 | orchestrator | ok: [testbed-node-4]
2026-02-03 05:11:05.930244 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:11:05.930255 | orchestrator | ok: [testbed-node-5]
2026-02-03 05:11:05.930266 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:11:05.930276 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:11:05.930325 | orchestrator |
2026-02-03 05:11:05.930338 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-02-03 05:11:05.930349 | orchestrator | Tuesday 03 February 2026 05:10:56 +0000 (0:00:02.817) 0:00:33.089 ******
2026-02-03 05:11:05.930360 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:11:05.930371 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:11:05.930382 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:11:05.930392 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:11:05.930403 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:11:05.930414 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:11:05.930425 | orchestrator |
2026-02-03 05:11:05.930436 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-02-03 05:11:05.930446 | orchestrator | Tuesday 03 February 2026 05:10:59 +0000 (0:00:02.068) 0:00:35.158 ******
2026-02-03 05:11:05.930457 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:11:05.930468 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:11:05.930479 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:11:05.930490 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:11:05.930501 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:11:05.930512 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:11:05.930523 | orchestrator |
2026-02-03 05:11:05.930534 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-02-03 05:11:05.930547 | orchestrator | Tuesday 03 February 2026 05:11:01 +0000 (0:00:02.332) 0:00:37.490 ******
2026-02-03 05:11:05.930558 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:11:05.930572 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:11:05.930584 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:11:05.930594 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:11:05.930605 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:11:05.930616 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:11:05.930626 | orchestrator |
2026-02-03 05:11:05.930638 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-02-03 05:11:05.930648 | orchestrator | Tuesday 03 February 2026 05:11:03 +0000 (0:00:01.876) 0:00:39.367 ******
2026-02-03 05:11:05.930668 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-02-03 05:11:05.930679 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-02-03 05:11:05.930690 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:11:05.930701 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-02-03 05:11:05.930712 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-02-03 05:11:05.930723 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:11:05.930734 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-02-03 05:11:05.930744 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-02-03 05:11:05.930755 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:11:05.930766 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-02-03 05:11:05.930777 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-02-03 05:11:05.930788 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:11:05.930799 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-02-03 05:11:05.930810 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-02-03 05:11:05.930820 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:11:05.930831 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-02-03 05:11:05.930842 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-02-03 05:11:05.930852 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:11:05.930863 | orchestrator |
2026-02-03 05:11:05.930874 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-02-03 05:11:05.930885 | orchestrator | Tuesday 03 February 2026 05:11:05 +0000 (0:00:02.181) 0:00:41.548 ******
2026-02-03 05:11:05.930896 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:11:05.930907 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:11:05.930927 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:13:14.088027 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:13:14.088145 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:13:14.088162 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:13:14.088174 | orchestrator |
2026-02-03 05:13:14.088187 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-02-03 05:13:14.088201 | orchestrator | Tuesday 03 February 2026 05:11:07 +0000 (0:00:01.806) 0:00:43.355 ******
2026-02-03 05:13:14.088213 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:13:14.088224 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:13:14.088234 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:13:14.088246 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:13:14.088257 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:13:14.088268 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:13:14.088279 | orchestrator |
2026-02-03 05:13:14.088290 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-02-03 05:13:14.088301 | orchestrator |
2026-02-03 05:13:14.088313 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-02-03 05:13:14.088326 | orchestrator | Tuesday 03 February 2026 05:11:10 +0000 (0:00:03.161) 0:00:46.517 ******
2026-02-03 05:13:14.088337 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:13:14.088349 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:13:14.088382 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:13:14.088393 | orchestrator |
2026-02-03 05:13:14.088409 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-02-03 05:13:14.088420 | orchestrator | Tuesday 03 February 2026 05:11:12 +0000 (0:00:01.922) 0:00:48.439 ******
2026-02-03 05:13:14.088431 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:13:14.088442 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:13:14.088452 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:13:14.088463 | orchestrator |
2026-02-03 05:13:14.088474 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-02-03 05:13:14.088485 | orchestrator | Tuesday 03 February 2026 05:11:14 +0000 (0:00:02.218) 0:00:50.658 ******
2026-02-03 05:13:14.088520 | orchestrator | changed: [testbed-node-0]
2026-02-03 05:13:14.088532 | orchestrator | changed: [testbed-node-2]
2026-02-03 05:13:14.088543 | orchestrator | changed: [testbed-node-1]
2026-02-03 05:13:14.088554 | orchestrator |
2026-02-03 05:13:14.088567 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-02-03 05:13:14.088581 | orchestrator | Tuesday 03 February 2026 05:11:16 +0000 (0:00:02.220) 0:00:52.878 ******
2026-02-03 05:13:14.088594 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:13:14.088606 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:13:14.088620 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:13:14.088633 | orchestrator |
2026-02-03 05:13:14.088646 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-02-03 05:13:14.088659 | orchestrator | Tuesday 03 February 2026 05:11:18 +0000 (0:00:02.114) 0:00:54.993 ******
2026-02-03 05:13:14.088672 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:13:14.088685 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:13:14.088698 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:13:14.088738 | orchestrator |
2026-02-03 05:13:14.088759 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-02-03 05:13:14.088779 | orchestrator | Tuesday 03 February 2026 05:11:20 +0000 (0:00:01.502) 0:00:56.496 ******
2026-02-03 05:13:14.088800 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:13:14.088820 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:13:14.088837 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:13:14.088851 | orchestrator |
2026-02-03 05:13:14.088865 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-02-03 05:13:14.088878 | orchestrator | Tuesday 03 February 2026 05:11:22 +0000 (0:00:01.848) 0:00:58.345 ******
2026-02-03 05:13:14.088891 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:13:14.088904 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:13:14.088918 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:13:14.088931 | orchestrator |
2026-02-03 05:13:14.088948 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-02-03 05:13:14.088963 | orchestrator | Tuesday 03 February 2026 05:11:24 +0000 (0:00:02.307) 0:01:00.652 ******
2026-02-03 05:13:14.088974 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 05:13:14.088985 | orchestrator |
2026-02-03 05:13:14.088995 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-02-03 05:13:14.089006 | orchestrator | Tuesday 03 February 2026 05:11:26 +0000 (0:00:02.231) 0:01:02.884 ******
2026-02-03 05:13:14.089017 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:13:14.089028 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:13:14.089038 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:13:14.089049 | orchestrator |
2026-02-03 05:13:14.089060 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-02-03 05:13:14.089071 | orchestrator | Tuesday 03 February 2026 05:11:29 +0000 (0:00:02.604) 0:01:05.488 ******
2026-02-03 05:13:14.089082 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:13:14.089093 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:13:14.089103 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:13:14.089114 | orchestrator |
2026-02-03 05:13:14.089125 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-02-03 05:13:14.089136 | orchestrator | Tuesday 03 February 2026 05:11:31 +0000 (0:00:01.790) 0:01:07.279 ******
2026-02-03 05:13:14.089147 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:13:14.089158 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:13:14.089168 | orchestrator | changed: [testbed-node-0]
2026-02-03 05:13:14.089179 | orchestrator |
2026-02-03 05:13:14.089190 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-02-03 05:13:14.089201 | orchestrator | Tuesday 03 February 2026 05:11:33 +0000 (0:00:01.860) 0:01:09.139 ******
2026-02-03 05:13:14.089213 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:13:14.089232 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:13:14.089251 | orchestrator | changed: [testbed-node-0]
2026-02-03 05:13:14.089280 | orchestrator |
2026-02-03 05:13:14.089297 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-02-03 05:13:14.089314 | orchestrator | Tuesday 03 February 2026 05:11:35 +0000 (0:00:02.524) 0:01:11.664 ******
2026-02-03 05:13:14.089332 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:13:14.089350 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:13:14.089392 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:13:14.089409 | orchestrator |
2026-02-03 05:13:14.089420 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-02-03 05:13:14.089431 | orchestrator | Tuesday 03 February 2026 05:11:36 +0000 (0:00:01.407) 0:01:13.072 ******
2026-02-03 05:13:14.089442 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:13:14.089453 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:13:14.089464 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:13:14.089475 | orchestrator |
2026-02-03 05:13:14.089486 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-02-03 05:13:14.089496 | orchestrator | Tuesday 03 February 2026 05:11:38 +0000 (0:00:01.649) 0:01:14.722 ******
2026-02-03 05:13:14.089507 | orchestrator | changed: [testbed-node-0]
2026-02-03 05:13:14.089518 | orchestrator | changed: [testbed-node-1]
2026-02-03 05:13:14.089529 | orchestrator | changed: [testbed-node-2]
2026-02-03 05:13:14.089540 | orchestrator |
2026-02-03 05:13:14.089551 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-02-03 05:13:14.089561 | orchestrator | Tuesday 03 February 2026 05:11:40 +0000 (0:00:02.383) 0:01:17.105 ******
2026-02-03 05:13:14.089572 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:13:14.089583 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:13:14.089594 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:13:14.089605 | orchestrator |
2026-02-03 05:13:14.089616 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-02-03 05:13:14.089627 | orchestrator | Tuesday 03 February 2026 05:11:42 +0000 (0:00:01.976) 0:01:19.082 ******
2026-02-03 05:13:14.089637 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:13:14.089648 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:13:14.089659 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:13:14.089670 | orchestrator |
2026-02-03 05:13:14.089681 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-02-03 05:13:14.089692 | orchestrator | Tuesday 03 February 2026 05:11:44 +0000 (0:00:01.489) 0:01:20.571 ******
2026-02-03 05:13:14.089703 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-03 05:13:14.089739 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-03 05:13:14.089751 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-03 05:13:14.089762 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-03 05:13:14.089773 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-03 05:13:14.089784 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-03 05:13:14.089795 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:13:14.089806 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:13:14.089817 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:13:14.089827 | orchestrator | 2026-02-03 05:13:14.089838 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-02-03 05:13:14.089849 | orchestrator | Tuesday 03 February 2026 05:12:08 +0000 (0:00:23.613) 0:01:44.184 ****** 2026-02-03 05:13:14.089860 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:13:14.089871 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:13:14.089890 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:13:14.089901 | orchestrator | 2026-02-03 05:13:14.089912 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-02-03 05:13:14.089923 | orchestrator | Tuesday 03 February 2026 05:12:09 +0000 (0:00:01.453) 0:01:45.637 ****** 2026-02-03 05:13:14.089934 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:13:14.089945 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:13:14.089956 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:13:14.089971 | orchestrator | 2026-02-03 05:13:14.089989 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-02-03 05:13:14.090007 | orchestrator | Tuesday 03 February 2026 05:12:11 +0000 (0:00:02.254) 0:01:47.892 ****** 2026-02-03 05:13:14.090090 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:13:14.090102 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:13:14.090113 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:13:14.090124 | orchestrator | 2026-02-03 05:13:14.090136 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-02-03 05:13:14.090147 | orchestrator | Tuesday 03 February 2026 05:12:14 +0000 (0:00:02.395) 0:01:50.288 ****** 2026-02-03 05:13:14.090158 | orchestrator 
| changed: [testbed-node-1] 2026-02-03 05:13:14.090169 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:13:14.090180 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:13:14.090190 | orchestrator | 2026-02-03 05:13:14.090201 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-02-03 05:13:14.090212 | orchestrator | Tuesday 03 February 2026 05:13:08 +0000 (0:00:54.112) 0:02:44.400 ****** 2026-02-03 05:13:14.090223 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:13:14.090234 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:13:14.090245 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:13:14.090256 | orchestrator | 2026-02-03 05:13:14.090275 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-02-03 05:13:14.090287 | orchestrator | Tuesday 03 February 2026 05:13:10 +0000 (0:00:01.849) 0:02:46.250 ****** 2026-02-03 05:13:14.090298 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:13:14.090308 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:13:14.090319 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:13:14.090330 | orchestrator | 2026-02-03 05:13:14.090341 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-02-03 05:13:14.090352 | orchestrator | Tuesday 03 February 2026 05:13:11 +0000 (0:00:01.845) 0:02:48.095 ****** 2026-02-03 05:13:14.090363 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:13:14.090374 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:13:14.090385 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:13:14.090396 | orchestrator | 2026-02-03 05:13:14.090417 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-02-03 05:14:05.423167 | orchestrator | Tuesday 03 February 2026 05:13:14 +0000 (0:00:02.082) 0:02:50.179 ****** 2026-02-03 05:14:05.423283 | orchestrator | ok: [testbed-node-1] 2026-02-03 
05:14:05.423301 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:14:05.423313 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:14:05.423324 | orchestrator | 2026-02-03 05:14:05.423337 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-02-03 05:14:05.423348 | orchestrator | Tuesday 03 February 2026 05:13:15 +0000 (0:00:01.842) 0:02:52.021 ****** 2026-02-03 05:14:05.423359 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:14:05.423371 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:14:05.423382 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:14:05.423393 | orchestrator | 2026-02-03 05:14:05.423404 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-02-03 05:14:05.423415 | orchestrator | Tuesday 03 February 2026 05:13:17 +0000 (0:00:01.663) 0:02:53.685 ****** 2026-02-03 05:14:05.423427 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:14:05.423439 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:14:05.423450 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:14:05.423461 | orchestrator | 2026-02-03 05:14:05.423472 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-02-03 05:14:05.423508 | orchestrator | Tuesday 03 February 2026 05:13:19 +0000 (0:00:01.848) 0:02:55.534 ****** 2026-02-03 05:14:05.423567 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:14:05.423581 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:14:05.423592 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:14:05.423603 | orchestrator | 2026-02-03 05:14:05.423614 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-02-03 05:14:05.423625 | orchestrator | Tuesday 03 February 2026 05:13:21 +0000 (0:00:02.236) 0:02:57.770 ****** 2026-02-03 05:14:05.423636 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:14:05.423647 | orchestrator | changed: 
[testbed-node-1] 2026-02-03 05:14:05.423658 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:14:05.423669 | orchestrator | 2026-02-03 05:14:05.423682 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-02-03 05:14:05.423695 | orchestrator | Tuesday 03 February 2026 05:13:23 +0000 (0:00:01.983) 0:02:59.753 ****** 2026-02-03 05:14:05.423709 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:14:05.423721 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:14:05.423734 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:14:05.423746 | orchestrator | 2026-02-03 05:14:05.423759 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-02-03 05:14:05.423772 | orchestrator | Tuesday 03 February 2026 05:13:25 +0000 (0:00:02.158) 0:03:01.912 ****** 2026-02-03 05:14:05.423784 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:14:05.423798 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:14:05.423811 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:14:05.423823 | orchestrator | 2026-02-03 05:14:05.423835 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-02-03 05:14:05.423848 | orchestrator | Tuesday 03 February 2026 05:13:27 +0000 (0:00:01.493) 0:03:03.405 ****** 2026-02-03 05:14:05.423861 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:14:05.423873 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:14:05.423887 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:14:05.423899 | orchestrator | 2026-02-03 05:14:05.423912 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-02-03 05:14:05.423925 | orchestrator | Tuesday 03 February 2026 05:13:28 +0000 (0:00:01.404) 0:03:04.810 ****** 2026-02-03 05:14:05.423938 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:14:05.423950 | orchestrator | ok: [testbed-node-0] 
2026-02-03 05:14:05.423962 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:14:05.423976 | orchestrator | 2026-02-03 05:14:05.423988 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-02-03 05:14:05.424002 | orchestrator | Tuesday 03 February 2026 05:13:30 +0000 (0:00:01.857) 0:03:06.667 ****** 2026-02-03 05:14:05.424022 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:14:05.424051 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:14:05.424072 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:14:05.424089 | orchestrator | 2026-02-03 05:14:05.424108 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-02-03 05:14:05.424127 | orchestrator | Tuesday 03 February 2026 05:13:32 +0000 (0:00:01.736) 0:03:08.404 ****** 2026-02-03 05:14:05.424145 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-03 05:14:05.424162 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-03 05:14:05.424179 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-03 05:14:05.424196 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-03 05:14:05.424213 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-03 05:14:05.424230 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-03 05:14:05.424266 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-03 05:14:05.424283 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-03 05:14:05.424302 | 
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-02-03 05:14:05.424320 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-03 05:14:05.424340 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-03 05:14:05.424359 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-02-03 05:14:05.424397 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-03 05:14:05.424415 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-03 05:14:05.424434 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-03 05:14:05.424453 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-03 05:14:05.424470 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-03 05:14:05.424487 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-03 05:14:05.424502 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-03 05:14:05.424555 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-03 05:14:05.424576 | orchestrator | 2026-02-03 05:14:05.424594 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-02-03 05:14:05.424612 | orchestrator | 2026-02-03 05:14:05.424629 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-02-03 05:14:05.424646 | orchestrator | Tuesday 03 February 2026 05:13:36 +0000 (0:00:04.571) 0:03:12.976 ****** 
2026-02-03 05:14:05.424664 | orchestrator | ok: [testbed-node-3] 2026-02-03 05:14:05.424682 | orchestrator | ok: [testbed-node-4] 2026-02-03 05:14:05.424699 | orchestrator | ok: [testbed-node-5] 2026-02-03 05:14:05.424716 | orchestrator | 2026-02-03 05:14:05.424734 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-02-03 05:14:05.424750 | orchestrator | Tuesday 03 February 2026 05:13:38 +0000 (0:00:01.408) 0:03:14.385 ****** 2026-02-03 05:14:05.424767 | orchestrator | ok: [testbed-node-3] 2026-02-03 05:14:05.424783 | orchestrator | ok: [testbed-node-4] 2026-02-03 05:14:05.424800 | orchestrator | ok: [testbed-node-5] 2026-02-03 05:14:05.424818 | orchestrator | 2026-02-03 05:14:05.424835 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-02-03 05:14:05.424853 | orchestrator | Tuesday 03 February 2026 05:13:39 +0000 (0:00:01.716) 0:03:16.102 ****** 2026-02-03 05:14:05.424871 | orchestrator | ok: [testbed-node-3] 2026-02-03 05:14:05.424889 | orchestrator | ok: [testbed-node-4] 2026-02-03 05:14:05.424907 | orchestrator | ok: [testbed-node-5] 2026-02-03 05:14:05.424925 | orchestrator | 2026-02-03 05:14:05.424944 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-02-03 05:14:05.424962 | orchestrator | Tuesday 03 February 2026 05:13:41 +0000 (0:00:01.743) 0:03:17.845 ****** 2026-02-03 05:14:05.424981 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-03 05:14:05.424999 | orchestrator | 2026-02-03 05:14:05.425017 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-02-03 05:14:05.425032 | orchestrator | Tuesday 03 February 2026 05:13:43 +0000 (0:00:02.003) 0:03:19.849 ****** 2026-02-03 05:14:05.425051 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:14:05.425070 | orchestrator | 
skipping: [testbed-node-4] 2026-02-03 05:14:05.425087 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:14:05.425120 | orchestrator | 2026-02-03 05:14:05.425137 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-02-03 05:14:05.425155 | orchestrator | Tuesday 03 February 2026 05:13:45 +0000 (0:00:01.459) 0:03:21.309 ****** 2026-02-03 05:14:05.425173 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:14:05.425191 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:14:05.425208 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:14:05.425225 | orchestrator | 2026-02-03 05:14:05.425241 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-02-03 05:14:05.425257 | orchestrator | Tuesday 03 February 2026 05:13:46 +0000 (0:00:01.436) 0:03:22.745 ****** 2026-02-03 05:14:05.425274 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:14:05.425290 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:14:05.425307 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:14:05.425325 | orchestrator | 2026-02-03 05:14:05.425342 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-02-03 05:14:05.425359 | orchestrator | Tuesday 03 February 2026 05:13:48 +0000 (0:00:01.406) 0:03:24.151 ****** 2026-02-03 05:14:05.425376 | orchestrator | ok: [testbed-node-3] 2026-02-03 05:14:05.425393 | orchestrator | ok: [testbed-node-4] 2026-02-03 05:14:05.425410 | orchestrator | ok: [testbed-node-5] 2026-02-03 05:14:05.425428 | orchestrator | 2026-02-03 05:14:05.425444 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-02-03 05:14:05.425477 | orchestrator | Tuesday 03 February 2026 05:13:49 +0000 (0:00:01.750) 0:03:25.902 ****** 2026-02-03 05:14:05.425497 | orchestrator | ok: [testbed-node-3] 2026-02-03 05:14:05.425514 | orchestrator | ok: [testbed-node-4] 
2026-02-03 05:14:05.425576 | orchestrator | ok: [testbed-node-5] 2026-02-03 05:14:05.425595 | orchestrator | 2026-02-03 05:14:05.425614 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-02-03 05:14:05.425632 | orchestrator | Tuesday 03 February 2026 05:13:52 +0000 (0:00:02.502) 0:03:28.404 ****** 2026-02-03 05:14:05.425651 | orchestrator | ok: [testbed-node-3] 2026-02-03 05:14:05.425670 | orchestrator | ok: [testbed-node-4] 2026-02-03 05:14:05.425689 | orchestrator | ok: [testbed-node-5] 2026-02-03 05:14:05.425706 | orchestrator | 2026-02-03 05:14:05.425724 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-02-03 05:14:05.425742 | orchestrator | Tuesday 03 February 2026 05:13:54 +0000 (0:00:02.544) 0:03:30.949 ****** 2026-02-03 05:14:05.425759 | orchestrator | changed: [testbed-node-3] 2026-02-03 05:14:05.425778 | orchestrator | changed: [testbed-node-4] 2026-02-03 05:14:05.425796 | orchestrator | changed: [testbed-node-5] 2026-02-03 05:14:05.425814 | orchestrator | 2026-02-03 05:14:05.425832 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-02-03 05:14:05.425849 | orchestrator | 2026-02-03 05:14:05.425867 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-02-03 05:14:05.425884 | orchestrator | Tuesday 03 February 2026 05:14:03 +0000 (0:00:08.253) 0:03:39.202 ****** 2026-02-03 05:14:05.425902 | orchestrator | ok: [testbed-manager] 2026-02-03 05:14:05.425919 | orchestrator | 2026-02-03 05:14:05.425936 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-02-03 05:14:05.425976 | orchestrator | Tuesday 03 February 2026 05:14:05 +0000 (0:00:02.316) 0:03:41.519 ****** 2026-02-03 05:15:19.985429 | orchestrator | ok: [testbed-manager] 2026-02-03 05:15:19.985525 | orchestrator | 2026-02-03 05:15:19.985535 | 
orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-03 05:15:19.985551 | orchestrator | Tuesday 03 February 2026 05:14:06 +0000 (0:00:01.503) 0:03:43.023 ****** 2026-02-03 05:15:19.985558 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-03 05:15:19.985564 | orchestrator | 2026-02-03 05:15:19.985571 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-03 05:15:19.985577 | orchestrator | Tuesday 03 February 2026 05:14:08 +0000 (0:00:01.619) 0:03:44.642 ****** 2026-02-03 05:15:19.985583 | orchestrator | changed: [testbed-manager] 2026-02-03 05:15:19.985608 | orchestrator | 2026-02-03 05:15:19.985614 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-02-03 05:15:19.985620 | orchestrator | Tuesday 03 February 2026 05:14:10 +0000 (0:00:02.112) 0:03:46.754 ****** 2026-02-03 05:15:19.985626 | orchestrator | changed: [testbed-manager] 2026-02-03 05:15:19.985632 | orchestrator | 2026-02-03 05:15:19.985638 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-02-03 05:15:19.985655 | orchestrator | Tuesday 03 February 2026 05:14:12 +0000 (0:00:01.720) 0:03:48.475 ****** 2026-02-03 05:15:19.985661 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-03 05:15:19.985667 | orchestrator | 2026-02-03 05:15:19.985673 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-02-03 05:15:19.985678 | orchestrator | Tuesday 03 February 2026 05:14:15 +0000 (0:00:03.395) 0:03:51.871 ****** 2026-02-03 05:15:19.985684 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-03 05:15:19.985690 | orchestrator | 2026-02-03 05:15:19.985696 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-02-03 05:15:19.985701 | orchestrator | Tuesday 03 February 
2026 05:14:17 +0000 (0:00:01.925) 0:03:53.796 ****** 2026-02-03 05:15:19.985707 | orchestrator | ok: [testbed-manager] 2026-02-03 05:15:19.985713 | orchestrator | 2026-02-03 05:15:19.985719 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-02-03 05:15:19.985725 | orchestrator | Tuesday 03 February 2026 05:14:19 +0000 (0:00:01.627) 0:03:55.423 ****** 2026-02-03 05:15:19.985730 | orchestrator | ok: [testbed-manager] 2026-02-03 05:15:19.985736 | orchestrator | 2026-02-03 05:15:19.985742 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-02-03 05:15:19.985748 | orchestrator | 2026-02-03 05:15:19.985753 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-02-03 05:15:19.985759 | orchestrator | Tuesday 03 February 2026 05:14:21 +0000 (0:00:01.785) 0:03:57.209 ****** 2026-02-03 05:15:19.985765 | orchestrator | ok: [testbed-manager] 2026-02-03 05:15:19.985770 | orchestrator | 2026-02-03 05:15:19.985776 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-02-03 05:15:19.985782 | orchestrator | Tuesday 03 February 2026 05:14:22 +0000 (0:00:01.272) 0:03:58.481 ****** 2026-02-03 05:15:19.985788 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-02-03 05:15:19.985794 | orchestrator | 2026-02-03 05:15:19.985800 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-02-03 05:15:19.985805 | orchestrator | Tuesday 03 February 2026 05:14:23 +0000 (0:00:01.578) 0:04:00.059 ****** 2026-02-03 05:15:19.985811 | orchestrator | ok: [testbed-manager] 2026-02-03 05:15:19.985817 | orchestrator | 2026-02-03 05:15:19.985823 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-02-03 05:15:19.985828 | orchestrator | Tuesday 03 February 2026 
05:14:26 +0000 (0:00:02.162) 0:04:02.222 ****** 2026-02-03 05:15:19.985834 | orchestrator | ok: [testbed-manager] 2026-02-03 05:15:19.985840 | orchestrator | 2026-02-03 05:15:19.985845 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-02-03 05:15:19.985851 | orchestrator | Tuesday 03 February 2026 05:14:28 +0000 (0:00:02.883) 0:04:05.106 ****** 2026-02-03 05:15:19.985857 | orchestrator | ok: [testbed-manager] 2026-02-03 05:15:19.985863 | orchestrator | 2026-02-03 05:15:19.985868 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-02-03 05:15:19.985874 | orchestrator | Tuesday 03 February 2026 05:14:30 +0000 (0:00:01.531) 0:04:06.637 ****** 2026-02-03 05:15:19.985880 | orchestrator | ok: [testbed-manager] 2026-02-03 05:15:19.985887 | orchestrator | 2026-02-03 05:15:19.985894 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-02-03 05:15:19.985901 | orchestrator | Tuesday 03 February 2026 05:14:32 +0000 (0:00:01.615) 0:04:08.253 ****** 2026-02-03 05:15:19.985908 | orchestrator | ok: [testbed-manager] 2026-02-03 05:15:19.985915 | orchestrator | 2026-02-03 05:15:19.985921 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-02-03 05:15:19.985934 | orchestrator | Tuesday 03 February 2026 05:14:33 +0000 (0:00:01.823) 0:04:10.076 ****** 2026-02-03 05:15:19.985940 | orchestrator | ok: [testbed-manager] 2026-02-03 05:15:19.985947 | orchestrator | 2026-02-03 05:15:19.985954 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-02-03 05:15:19.985960 | orchestrator | Tuesday 03 February 2026 05:14:36 +0000 (0:00:02.771) 0:04:12.848 ****** 2026-02-03 05:15:19.985967 | orchestrator | ok: [testbed-manager] 2026-02-03 05:15:19.985974 | orchestrator | 2026-02-03 05:15:19.985980 | orchestrator | PLAY [Run post actions on master 
nodes] **************************************** 2026-02-03 05:15:19.985987 | orchestrator | 2026-02-03 05:15:19.985994 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-02-03 05:15:19.986000 | orchestrator | Tuesday 03 February 2026 05:14:38 +0000 (0:00:01.875) 0:04:14.724 ****** 2026-02-03 05:15:19.986007 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:15:19.986014 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:15:19.986058 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:15:19.986064 | orchestrator | 2026-02-03 05:15:19.986071 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-02-03 05:15:19.986078 | orchestrator | Tuesday 03 February 2026 05:14:40 +0000 (0:00:01.475) 0:04:16.200 ****** 2026-02-03 05:15:19.986085 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:15:19.986091 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:15:19.986098 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:15:19.986105 | orchestrator | 2026-02-03 05:15:19.986124 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-02-03 05:15:19.986131 | orchestrator | Tuesday 03 February 2026 05:14:41 +0000 (0:00:01.689) 0:04:17.890 ****** 2026-02-03 05:15:19.986138 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:15:19.986145 | orchestrator | 2026-02-03 05:15:19.986151 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-02-03 05:15:19.986158 | orchestrator | Tuesday 03 February 2026 05:14:43 +0000 (0:00:01.782) 0:04:19.672 ****** 2026-02-03 05:15:19.986165 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-03 05:15:19.986171 | orchestrator | 2026-02-03 05:15:19.986178 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] 
********************* 2026-02-03 05:15:19.986185 | orchestrator | Tuesday 03 February 2026 05:14:45 +0000 (0:00:02.002) 0:04:21.675 ****** 2026-02-03 05:15:19.986191 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-03 05:15:19.986198 | orchestrator | 2026-02-03 05:15:19.986205 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-02-03 05:15:19.986212 | orchestrator | Tuesday 03 February 2026 05:14:47 +0000 (0:00:01.952) 0:04:23.627 ****** 2026-02-03 05:15:19.986219 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:15:19.986226 | orchestrator | 2026-02-03 05:15:19.986233 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-02-03 05:15:19.986240 | orchestrator | Tuesday 03 February 2026 05:14:48 +0000 (0:00:01.156) 0:04:24.783 ****** 2026-02-03 05:15:19.986246 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-03 05:15:19.986252 | orchestrator | 2026-02-03 05:15:19.986258 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-02-03 05:15:19.986263 | orchestrator | Tuesday 03 February 2026 05:14:50 +0000 (0:00:02.098) 0:04:26.882 ****** 2026-02-03 05:15:19.986269 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-03 05:15:19.986294 | orchestrator | 2026-02-03 05:15:19.986300 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-02-03 05:15:19.986306 | orchestrator | Tuesday 03 February 2026 05:14:53 +0000 (0:00:02.342) 0:04:29.224 ****** 2026-02-03 05:15:19.986312 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-03 05:15:19.986317 | orchestrator | 2026-02-03 05:15:19.986323 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-02-03 05:15:19.986329 | orchestrator | Tuesday 03 February 2026 05:14:54 +0000 (0:00:01.261) 0:04:30.485 ****** 2026-02-03 05:15:19.986340 | orchestrator | ok: 
[testbed-node-0 -> localhost] 2026-02-03 05:15:19.986346 | orchestrator | 2026-02-03 05:15:19.986351 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-02-03 05:15:19.986357 | orchestrator | Tuesday 03 February 2026 05:14:55 +0000 (0:00:01.337) 0:04:31.823 ****** 2026-02-03 05:15:19.986363 | orchestrator | ok: [testbed-node-0 -> localhost] => { 2026-02-03 05:15:19.986369 | orchestrator |  "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n" 2026-02-03 05:15:19.986376 | orchestrator | } 2026-02-03 05:15:19.986382 | orchestrator | 2026-02-03 05:15:19.986387 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-02-03 05:15:19.986393 | orchestrator | Tuesday 03 February 2026 05:14:56 +0000 (0:00:01.211) 0:04:33.035 ****** 2026-02-03 05:15:19.986399 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:15:19.986404 | orchestrator | 2026-02-03 05:15:19.986410 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-02-03 05:15:19.986416 | orchestrator | Tuesday 03 February 2026 05:14:58 +0000 (0:00:01.301) 0:04:34.336 ****** 2026-02-03 05:15:19.986422 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-02-03 05:15:19.986428 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-02-03 05:15:19.986434 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-02-03 05:15:19.986439 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-02-03 05:15:19.986445 | orchestrator | 2026-02-03 05:15:19.986451 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-02-03 05:15:19.986457 | orchestrator | Tuesday 03 February 2026 05:15:04 +0000 (0:00:05.812) 0:04:40.148 ****** 2026-02-03 05:15:19.986462 | orchestrator 
| ok: [testbed-node-0 -> localhost] 2026-02-03 05:15:19.986468 | orchestrator | 2026-02-03 05:15:19.986474 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-02-03 05:15:19.986479 | orchestrator | Tuesday 03 February 2026 05:15:06 +0000 (0:00:02.770) 0:04:42.919 ****** 2026-02-03 05:15:19.986485 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-03 05:15:19.986491 | orchestrator | 2026-02-03 05:15:19.986497 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-02-03 05:15:19.986502 | orchestrator | Tuesday 03 February 2026 05:15:09 +0000 (0:00:02.760) 0:04:45.679 ****** 2026-02-03 05:15:19.986508 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-03 05:15:19.986514 | orchestrator | 2026-02-03 05:15:19.986520 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-02-03 05:15:19.986531 | orchestrator | Tuesday 03 February 2026 05:15:13 +0000 (0:00:04.303) 0:04:49.983 ****** 2026-02-03 05:15:19.986537 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:15:19.986543 | orchestrator | 2026-02-03 05:15:19.986549 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-02-03 05:15:19.986555 | orchestrator | Tuesday 03 February 2026 05:15:15 +0000 (0:00:01.180) 0:04:51.164 ****** 2026-02-03 05:15:19.986560 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-02-03 05:15:19.986567 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-02-03 05:15:19.986572 | orchestrator | 2026-02-03 05:15:19.986578 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-02-03 05:15:19.986584 | orchestrator | Tuesday 03 February 2026 05:15:18 +0000 (0:00:03.462) 0:04:54.626 ****** 2026-02-03 
05:15:19.986590 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:15:19.986600 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:15:48.041639 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:15:48.041724 | orchestrator | 2026-02-03 05:15:48.041733 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-02-03 05:15:48.041741 | orchestrator | Tuesday 03 February 2026 05:15:19 +0000 (0:00:01.459) 0:04:56.086 ****** 2026-02-03 05:15:48.041765 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:15:48.041775 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:15:48.041782 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:15:48.041792 | orchestrator | 2026-02-03 05:15:48.041798 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-02-03 05:15:48.041803 | orchestrator | 2026-02-03 05:15:48.041808 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-02-03 05:15:48.041814 | orchestrator | Tuesday 03 February 2026 05:15:22 +0000 (0:00:02.159) 0:04:58.245 ****** 2026-02-03 05:15:48.041819 | orchestrator | ok: [testbed-manager] 2026-02-03 05:15:48.041824 | orchestrator | 2026-02-03 05:15:48.041829 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-02-03 05:15:48.041835 | orchestrator | Tuesday 03 February 2026 05:15:23 +0000 (0:00:01.157) 0:04:59.403 ****** 2026-02-03 05:15:48.041851 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-02-03 05:15:48.041857 | orchestrator | 2026-02-03 05:15:48.041862 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-02-03 05:15:48.041868 | orchestrator | Tuesday 03 February 2026 05:15:24 +0000 (0:00:01.493) 0:05:00.897 ****** 2026-02-03 05:15:48.041873 | orchestrator | ok: [testbed-manager] 2026-02-03 05:15:48.041878 | 
orchestrator | 2026-02-03 05:15:48.041883 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-02-03 05:15:48.041888 | orchestrator | 2026-02-03 05:15:48.041893 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-02-03 05:15:48.041898 | orchestrator | Tuesday 03 February 2026 05:15:30 +0000 (0:00:05.714) 0:05:06.611 ****** 2026-02-03 05:15:48.041904 | orchestrator | ok: [testbed-node-3] 2026-02-03 05:15:48.041909 | orchestrator | ok: [testbed-node-4] 2026-02-03 05:15:48.041914 | orchestrator | ok: [testbed-node-5] 2026-02-03 05:15:48.041919 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:15:48.041924 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:15:48.041929 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:15:48.041934 | orchestrator | 2026-02-03 05:15:48.041939 | orchestrator | TASK [Manage labels] *********************************************************** 2026-02-03 05:15:48.041945 | orchestrator | Tuesday 03 February 2026 05:15:32 +0000 (0:00:02.047) 0:05:08.659 ****** 2026-02-03 05:15:48.041950 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-03 05:15:48.041955 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-03 05:15:48.041963 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-03 05:15:48.041971 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-03 05:15:48.041979 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-03 05:15:48.041987 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-03 05:15:48.041995 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 
2026-02-03 05:15:48.042003 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-03 05:15:48.042057 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-03 05:15:48.042069 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-03 05:15:48.042078 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-03 05:15:48.042085 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-03 05:15:48.042090 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-03 05:15:48.042096 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-03 05:15:48.042101 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-03 05:15:48.042112 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-03 05:15:48.042117 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-03 05:15:48.042122 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-03 05:15:48.042127 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-03 05:15:48.042132 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-03 05:15:48.042138 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-03 05:15:48.042143 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-03 05:15:48.042148 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-03 
05:15:48.042153 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-03 05:15:48.042158 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-03 05:15:48.042163 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-03 05:15:48.042181 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-03 05:15:48.042186 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-03 05:15:48.042215 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-03 05:15:48.042222 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-03 05:15:48.042228 | orchestrator | 2026-02-03 05:15:48.042234 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-02-03 05:15:48.042240 | orchestrator | Tuesday 03 February 2026 05:15:43 +0000 (0:00:10.666) 0:05:19.325 ****** 2026-02-03 05:15:48.042246 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:15:48.042253 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:15:48.042259 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:15:48.042265 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:15:48.042271 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:15:48.042277 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:15:48.042283 | orchestrator | 2026-02-03 05:15:48.042289 | orchestrator | TASK [Manage taints] *********************************************************** 2026-02-03 05:15:48.042299 | orchestrator | Tuesday 03 February 2026 05:15:45 +0000 (0:00:02.083) 0:05:21.409 ****** 2026-02-03 05:15:48.042305 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:15:48.042311 | orchestrator | skipping: [testbed-node-4] 
2026-02-03 05:15:48.042318 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:15:48.042324 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:15:48.042330 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:15:48.042336 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:15:48.042342 | orchestrator | 2026-02-03 05:15:48.042348 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 05:15:48.042355 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-03 05:15:48.042363 | orchestrator | testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-03 05:15:48.042369 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-03 05:15:48.042376 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-03 05:15:48.042382 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-03 05:15:48.042392 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-03 05:15:48.042398 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-03 05:15:48.042405 | orchestrator | 2026-02-03 05:15:48.042411 | orchestrator | 2026-02-03 05:15:48.042417 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 05:15:48.042423 | orchestrator | Tuesday 03 February 2026 05:15:48 +0000 (0:00:02.709) 0:05:24.119 ****** 2026-02-03 05:15:48.042429 | orchestrator | =============================================================================== 2026-02-03 05:15:48.042435 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 54.11s 2026-02-03 05:15:48.042441 | 
orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 23.61s 2026-02-03 05:15:48.042448 | orchestrator | Manage labels ---------------------------------------------------------- 10.67s 2026-02-03 05:15:48.042454 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.25s 2026-02-03 05:15:48.042460 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 5.81s 2026-02-03 05:15:48.042466 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.71s 2026-02-03 05:15:48.042472 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 5.01s 2026-02-03 05:15:48.042478 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.57s 2026-02-03 05:15:48.042487 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 4.30s 2026-02-03 05:15:48.042496 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 3.46s 2026-02-03 05:15:48.042515 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 3.40s 2026-02-03 05:15:48.042524 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 3.16s 2026-02-03 05:15:48.042532 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.88s 2026-02-03 05:15:48.042549 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 2.82s 2026-02-03 05:15:48.042558 | orchestrator | kubectl : Install required packages ------------------------------------- 2.77s 2026-02-03 05:15:48.042564 | orchestrator | k3s_server_post : Set _cilium_bgp_neighbors fact ------------------------ 2.77s 2026-02-03 05:15:48.042569 | orchestrator | k3s_server_post : Copy 
BGP manifests to first master -------------------- 2.76s 2026-02-03 05:15:48.042574 | orchestrator | Manage taints ----------------------------------------------------------- 2.71s 2026-02-03 05:15:48.042584 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.67s 2026-02-03 05:15:48.583968 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 2.65s 2026-02-03 05:15:48.959557 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-02-03 05:15:48.959674 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh 2026-02-03 05:15:48.971217 | orchestrator | + set -e 2026-02-03 05:15:48.971308 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-03 05:15:48.971321 | orchestrator | ++ export INTERACTIVE=false 2026-02-03 05:15:48.971333 | orchestrator | ++ INTERACTIVE=false 2026-02-03 05:15:48.971343 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-03 05:15:48.971352 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-03 05:15:48.971363 | orchestrator | + osism apply openstackclient 2026-02-03 05:16:01.431691 | orchestrator | 2026-02-03 05:16:01 | INFO  | Task ffb12ef0-f3bb-4fb7-a21c-16b7ad876ab8 (openstackclient) was prepared for execution. 2026-02-03 05:16:01.431832 | orchestrator | 2026-02-03 05:16:01 | INFO  | It takes a moment until task ffb12ef0-f3bb-4fb7-a21c-16b7ad876ab8 (openstackclient) has been started and output is visible here. 
2026-02-03 05:16:38.980885 | orchestrator | 2026-02-03 05:16:38.981174 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-02-03 05:16:38.981204 | orchestrator | 2026-02-03 05:16:38.981246 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-02-03 05:16:38.981264 | orchestrator | Tuesday 03 February 2026 05:16:08 +0000 (0:00:02.086) 0:00:02.086 ****** 2026-02-03 05:16:38.981282 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-02-03 05:16:38.981300 | orchestrator | 2026-02-03 05:16:38.981316 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-02-03 05:16:38.981333 | orchestrator | Tuesday 03 February 2026 05:16:10 +0000 (0:00:01.854) 0:00:03.940 ****** 2026-02-03 05:16:38.981349 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-02-03 05:16:38.981398 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data) 2026-02-03 05:16:38.981415 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-02-03 05:16:38.981432 | orchestrator | 2026-02-03 05:16:38.981448 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-02-03 05:16:38.981464 | orchestrator | Tuesday 03 February 2026 05:16:12 +0000 (0:00:02.447) 0:00:06.388 ****** 2026-02-03 05:16:38.981481 | orchestrator | changed: [testbed-manager] 2026-02-03 05:16:38.981499 | orchestrator | 2026-02-03 05:16:38.981516 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-02-03 05:16:38.981532 | orchestrator | Tuesday 03 February 2026 05:16:15 +0000 (0:00:02.364) 0:00:08.752 ****** 2026-02-03 05:16:38.981548 | orchestrator | ok: [testbed-manager] 2026-02-03 05:16:38.981565 | 
orchestrator | 2026-02-03 05:16:38.981579 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-02-03 05:16:38.981592 | orchestrator | Tuesday 03 February 2026 05:16:17 +0000 (0:00:02.246) 0:00:10.998 ****** 2026-02-03 05:16:38.981607 | orchestrator | ok: [testbed-manager] 2026-02-03 05:16:38.981625 | orchestrator | 2026-02-03 05:16:38.981643 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-02-03 05:16:38.981659 | orchestrator | Tuesday 03 February 2026 05:16:19 +0000 (0:00:02.158) 0:00:13.157 ****** 2026-02-03 05:16:38.981675 | orchestrator | ok: [testbed-manager] 2026-02-03 05:16:38.981691 | orchestrator | 2026-02-03 05:16:38.981707 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-02-03 05:16:38.981723 | orchestrator | Tuesday 03 February 2026 05:16:21 +0000 (0:00:01.550) 0:00:14.707 ****** 2026-02-03 05:16:38.981739 | orchestrator | changed: [testbed-manager] 2026-02-03 05:16:38.981753 | orchestrator | 2026-02-03 05:16:38.981768 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-02-03 05:16:38.981783 | orchestrator | Tuesday 03 February 2026 05:16:32 +0000 (0:00:11.760) 0:00:26.467 ****** 2026-02-03 05:16:38.981798 | orchestrator | changed: [testbed-manager] 2026-02-03 05:16:38.981814 | orchestrator | 2026-02-03 05:16:38.981829 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-02-03 05:16:38.981845 | orchestrator | Tuesday 03 February 2026 05:16:34 +0000 (0:00:02.151) 0:00:28.618 ****** 2026-02-03 05:16:38.981861 | orchestrator | changed: [testbed-manager] 2026-02-03 05:16:38.981878 | orchestrator | 2026-02-03 05:16:38.981894 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-02-03 05:16:38.981911 | orchestrator | Tuesday 03 February 
2026 05:16:36 +0000 (0:00:01.689) 0:00:30.308 ****** 2026-02-03 05:16:38.981928 | orchestrator | ok: [testbed-manager] 2026-02-03 05:16:38.981943 | orchestrator | 2026-02-03 05:16:38.981959 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 05:16:38.981976 | orchestrator | testbed-manager : ok=10  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-03 05:16:38.981993 | orchestrator | 2026-02-03 05:16:38.982113 | orchestrator | 2026-02-03 05:16:38.982136 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 05:16:38.982155 | orchestrator | Tuesday 03 February 2026 05:16:38 +0000 (0:00:01.968) 0:00:32.277 ****** 2026-02-03 05:16:38.982172 | orchestrator | =============================================================================== 2026-02-03 05:16:38.982188 | orchestrator | osism.services.openstackclient : Restart openstackclient service ------- 11.76s 2026-02-03 05:16:38.982199 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.45s 2026-02-03 05:16:38.982208 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.36s 2026-02-03 05:16:38.982216 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 2.25s 2026-02-03 05:16:38.982224 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.16s 2026-02-03 05:16:38.982232 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 2.15s 2026-02-03 05:16:38.982240 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.97s 2026-02-03 05:16:38.982248 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.85s 2026-02-03 05:16:38.982256 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.69s 
2026-02-03 05:16:38.982264 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.55s 2026-02-03 05:16:39.360915 | orchestrator | + osism apply -a upgrade common 2026-02-03 05:16:41.666677 | orchestrator | 2026-02-03 05:16:41 | INFO  | Task 2bd49bd3-fcfa-4bdb-bea7-2228569854d0 (common) was prepared for execution. 2026-02-03 05:16:41.666782 | orchestrator | 2026-02-03 05:16:41 | INFO  | It takes a moment until task 2bd49bd3-fcfa-4bdb-bea7-2228569854d0 (common) has been started and output is visible here. 2026-02-03 05:17:03.473320 | orchestrator | 2026-02-03 05:17:03.473428 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-03 05:17:03.473444 | orchestrator | 2026-02-03 05:17:03.473476 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-03 05:17:03.473488 | orchestrator | Tuesday 03 February 2026 05:16:48 +0000 (0:00:02.374) 0:00:02.374 ****** 2026-02-03 05:17:03.473499 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-03 05:17:03.473512 | orchestrator | 2026-02-03 05:17:03.473523 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-03 05:17:03.473534 | orchestrator | Tuesday 03 February 2026 05:16:52 +0000 (0:00:03.714) 0:00:06.089 ****** 2026-02-03 05:17:03.473545 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-03 05:17:03.473556 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-03 05:17:03.473567 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-03 05:17:03.473578 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-03 05:17:03.473590 | orchestrator | ok: 
[testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-03 05:17:03.473601 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-03 05:17:03.473612 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-03 05:17:03.473623 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-03 05:17:03.473634 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-03 05:17:03.473645 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-03 05:17:03.473655 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-03 05:17:03.473666 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-03 05:17:03.473677 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-03 05:17:03.473712 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-03 05:17:03.473724 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-03 05:17:03.473735 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-03 05:17:03.473746 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-03 05:17:03.473756 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-03 05:17:03.473767 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-03 05:17:03.473777 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-03 05:17:03.473788 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-03 
05:17:03.473799 | orchestrator | 2026-02-03 05:17:03.473810 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-03 05:17:03.473820 | orchestrator | Tuesday 03 February 2026 05:16:57 +0000 (0:00:05.047) 0:00:11.137 ****** 2026-02-03 05:17:03.473835 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-03 05:17:03.473850 | orchestrator | 2026-02-03 05:17:03.473863 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-03 05:17:03.473876 | orchestrator | Tuesday 03 February 2026 05:17:00 +0000 (0:00:03.062) 0:00:14.200 ****** 2026-02-03 05:17:03.473894 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 05:17:03.473917 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 05:17:03.473956 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 05:17:03.473971 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 05:17:03.473984 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 05:17:03.474092 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:17:03.474106 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:17:03.474292 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:17:03.474306 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:17:03.474329 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:17:08.280291 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:17:08.280523 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:17:08.280557 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:17:08.280578 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:17:08.280598 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:17:08.280618 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 05:17:08.280642 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 05:17:08.280661 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:17:08.280726 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:17:08.280762 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:17:08.280782 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:17:08.280802 | orchestrator | 2026-02-03 05:17:08.280825 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-03 05:17:08.280845 | orchestrator | Tuesday 03 February 2026 05:17:07 +0000 (0:00:06.840) 0:00:21.040 ****** 2026-02-03 05:17:08.280866 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-03 05:17:08.280891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-03 05:17:08.280913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:17:08.280932 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:17:08.281007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-03 05:17:10.383521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:17:10.383600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:17:10.383611 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:17:10.383618 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:17:10.383625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:17:10.383631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-03 05:17:10.383674 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-03 05:17:10.383681 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:17:10.383686 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:17:10.383692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:17:10.383726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:17:10.383732 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-03 05:17:10.383738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:17:10.383743 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:17:10.383748 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:17:10.383754 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:17:10.383759 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:17:10.383765 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-03 05:17:10.383770 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:17:10.383786 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:17:13.714458 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:17:13.714547 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:17:13.714562 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:17:13.714570 | orchestrator | 2026-02-03 05:17:13.714579 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-03 05:17:13.714588 | orchestrator | Tuesday 03 February 2026 05:17:10 +0000 (0:00:02.931) 
0:00:23.971 ****** 2026-02-03 05:17:13.714597 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-03 05:17:13.714608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-03 05:17:13.714616 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:17:13.714624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:17:13.714652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-03 05:17:13.714673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:17:13.714703 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:17:13.714712 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:17:13.714719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:17:13.714727 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:17:13.714734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-03 05:17:13.714742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:17:13.714750 | orchestrator | skipping: [testbed-node-1] 2026-02-03 
05:17:13.714757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-03 05:17:13.714772 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:17:13.714780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:17:13.714793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-03 05:17:26.863975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-03 05:17:26.864074 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:17:26.864089 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:17:26.864101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:17:26.864111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:17:26.864139 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:17:26.864148 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:17:26.864157 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 
05:17:26.864165 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:17:26.864178 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:17:26.864186 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:17:26.864195 | orchestrator | 2026-02-03 05:17:26.864203 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-02-03 05:17:26.864213 | orchestrator | Tuesday 03 February 2026 05:17:13 +0000 (0:00:03.330) 0:00:27.302 ****** 2026-02-03 05:17:26.864221 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:17:26.864229 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:17:26.864237 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:17:26.864245 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:17:26.864267 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:17:26.864276 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:17:26.864284 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:17:26.864292 | orchestrator | 2026-02-03 05:17:26.864300 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-03 05:17:26.864308 | orchestrator | Tuesday 03 February 2026 05:17:16 +0000 (0:00:02.442) 0:00:29.744 ****** 2026-02-03 05:17:26.864315 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:17:26.864323 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:17:26.864331 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:17:26.864339 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:17:26.864346 | 
orchestrator | skipping: [testbed-node-3] 2026-02-03 05:17:26.864354 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:17:26.864362 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:17:26.864369 | orchestrator | 2026-02-03 05:17:26.864377 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-03 05:17:26.864385 | orchestrator | Tuesday 03 February 2026 05:17:18 +0000 (0:00:02.238) 0:00:31.983 ****** 2026-02-03 05:17:26.864393 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:17:26.864400 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:17:26.864408 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:17:26.864416 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:17:26.864424 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:17:26.864437 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:17:26.864445 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:17:26.864453 | orchestrator | 2026-02-03 05:17:26.864461 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-02-03 05:17:26.864469 | orchestrator | Tuesday 03 February 2026 05:17:20 +0000 (0:00:02.151) 0:00:34.134 ****** 2026-02-03 05:17:26.864477 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:17:26.864484 | orchestrator | changed: [testbed-manager] 2026-02-03 05:17:26.864492 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:17:26.864500 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:17:26.864507 | orchestrator | changed: [testbed-node-3] 2026-02-03 05:17:26.864515 | orchestrator | changed: [testbed-node-4] 2026-02-03 05:17:26.864523 | orchestrator | changed: [testbed-node-5] 2026-02-03 05:17:26.864530 | orchestrator | 2026-02-03 05:17:26.864538 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-02-03 05:17:26.864548 | orchestrator | Tuesday 03 February 2026 05:17:23 +0000 
(0:00:03.379) 0:00:37.514 ****** 2026-02-03 05:17:26.864561 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 05:17:26.864574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 05:17:26.864583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 05:17:26.864596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 05:17:26.864611 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 05:17:31.380460 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:17:31.380566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:17:31.380576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:17:31.380583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:17:31.380590 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:17:31.380611 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:17:31.380620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:17:31.380646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:17:31.380653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:17:31.380660 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:17:31.380667 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 05:17:31.380675 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 05:17:31.380683 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:17:31.380690 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:17:31.380701 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:17:31.380719 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:17:55.901652 | orchestrator | 2026-02-03 05:17:55.901770 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-03 05:17:55.901788 | orchestrator | Tuesday 03 February 2026 05:17:31 +0000 (0:00:07.452) 0:00:44.967 ****** 2026-02-03 05:17:55.901800 | orchestrator | [WARNING]: Skipped 2026-02-03 05:17:55.901814 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-03 05:17:55.901826 | orchestrator | to this access issue: 2026-02-03 05:17:55.901838 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-03 05:17:55.901849 | orchestrator | directory 2026-02-03 05:17:55.901885 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-03 05:17:55.901908 | orchestrator | 2026-02-03 05:17:55.901928 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-02-03 05:17:55.901947 | orchestrator | Tuesday 03 February 2026 05:17:34 +0000 (0:00:02.650) 0:00:47.617 ****** 2026-02-03 05:17:55.901963 | orchestrator | [WARNING]: Skipped 2026-02-03 05:17:55.901974 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-03 05:17:55.901985 | orchestrator | to this access issue: 2026-02-03 05:17:55.901996 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-02-03 05:17:55.902007 | orchestrator | directory 2026-02-03 05:17:55.902072 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-03 05:17:55.902085 | orchestrator | 2026-02-03 05:17:55.902096 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-02-03 05:17:55.902107 | orchestrator | Tuesday 03 February 2026 
05:17:36 +0000 (0:00:02.219) 0:00:49.836 ****** 2026-02-03 05:17:55.902118 | orchestrator | [WARNING]: Skipped 2026-02-03 05:17:55.902129 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-03 05:17:55.902141 | orchestrator | to this access issue: 2026-02-03 05:17:55.902152 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-02-03 05:17:55.902163 | orchestrator | directory 2026-02-03 05:17:55.902176 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-03 05:17:55.902190 | orchestrator | 2026-02-03 05:17:55.902204 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-03 05:17:55.902218 | orchestrator | Tuesday 03 February 2026 05:17:38 +0000 (0:00:02.002) 0:00:51.839 ****** 2026-02-03 05:17:55.902238 | orchestrator | [WARNING]: Skipped 2026-02-03 05:17:55.902254 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-03 05:17:55.902265 | orchestrator | to this access issue: 2026-02-03 05:17:55.902276 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-02-03 05:17:55.902287 | orchestrator | directory 2026-02-03 05:17:55.902298 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-03 05:17:55.902309 | orchestrator | 2026-02-03 05:17:55.902321 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-03 05:17:55.902331 | orchestrator | Tuesday 03 February 2026 05:17:40 +0000 (0:00:01.924) 0:00:53.764 ****** 2026-02-03 05:17:55.902342 | orchestrator | changed: [testbed-manager] 2026-02-03 05:17:55.902353 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:17:55.902390 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:17:55.902402 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:17:55.902412 | orchestrator | changed: [testbed-node-3] 2026-02-03 
05:17:55.902423 | orchestrator | changed: [testbed-node-4] 2026-02-03 05:17:55.902434 | orchestrator | changed: [testbed-node-5] 2026-02-03 05:17:55.902445 | orchestrator | 2026-02-03 05:17:55.902455 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-03 05:17:55.902466 | orchestrator | Tuesday 03 February 2026 05:17:45 +0000 (0:00:05.014) 0:00:58.778 ****** 2026-02-03 05:17:55.902477 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-03 05:17:55.902490 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-03 05:17:55.902501 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-03 05:17:55.902528 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-03 05:17:55.902539 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-03 05:17:55.902550 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-03 05:17:55.902561 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-03 05:17:55.902572 | orchestrator | 2026-02-03 05:17:55.902583 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-03 05:17:55.902594 | orchestrator | Tuesday 03 February 2026 05:17:49 +0000 (0:00:03.990) 0:01:02.769 ****** 2026-02-03 05:17:55.902605 | orchestrator | ok: [testbed-manager] 2026-02-03 05:17:55.902616 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:17:55.902627 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:17:55.902637 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:17:55.902648 | orchestrator | ok: [testbed-node-3] 2026-02-03 
05:17:55.902659 | orchestrator | ok: [testbed-node-4] 2026-02-03 05:17:55.902670 | orchestrator | ok: [testbed-node-5] 2026-02-03 05:17:55.902680 | orchestrator | 2026-02-03 05:17:55.902691 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-03 05:17:55.902707 | orchestrator | Tuesday 03 February 2026 05:17:52 +0000 (0:00:03.399) 0:01:06.169 ****** 2026-02-03 05:17:55.902756 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 05:17:55.902782 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:17:55.902800 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 05:17:55.902834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:17:55.902853 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:17:55.902921 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:17:55.902945 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 
'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 05:17:55.902968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:17:55.903003 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 05:18:05.957157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:18:05.957292 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 05:18:05.957307 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:18:05.957319 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:18:05.957346 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 05:18:05.957357 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:18:05.957368 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 05:18:05.957396 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:18:05.957416 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:18:05.957427 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:18:05.957437 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 
05:18:05.957447 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:18:05.957458 | orchestrator | 2026-02-03 05:18:05.957469 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-02-03 05:18:05.957480 | orchestrator | Tuesday 03 February 2026 05:17:55 +0000 (0:00:03.325) 0:01:09.495 ****** 2026-02-03 05:18:05.957490 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-03 05:18:05.957501 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-03 05:18:05.957512 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-03 05:18:05.957529 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-03 05:18:05.957547 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-03 05:18:05.957564 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-03 05:18:05.957581 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-03 05:18:05.957600 | orchestrator | 2026-02-03 05:18:05.957617 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-02-03 05:18:05.957628 | orchestrator | Tuesday 03 February 2026 05:17:59 +0000 (0:00:03.770) 0:01:13.265 ****** 2026-02-03 05:18:05.957637 | orchestrator | ok: [testbed-manager] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-03 05:18:05.957647 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-03 05:18:05.957657 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-03 05:18:05.957666 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-03 05:18:05.957676 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-03 05:18:05.957687 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-03 05:18:05.957706 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-03 05:18:05.957717 | orchestrator | 2026-02-03 05:18:05.957728 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-02-03 05:18:05.957741 | orchestrator | Tuesday 03 February 2026 05:18:03 +0000 (0:00:03.976) 0:01:17.241 ****** 2026-02-03 05:18:05.957769 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 05:18:09.589717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 05:18:09.589825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 05:18:09.589893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 05:18:09.589927 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 
05:18:09.589966 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:18:09.589979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:18:09.590083 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:18:09.590121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:18:09.590135 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:18:09.590147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:18:09.590165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:18:09.590177 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:18:09.590189 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 05:18:09.590210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:18:09.590221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:18:09.590240 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-03 05:18:12.907717 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:18:12.907852 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:18:12.907896 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:18:12.907913 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:18:12.907945 | orchestrator | 2026-02-03 05:18:12.907953 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-02-03 05:18:12.907961 | orchestrator | Tuesday 03 February 2026 05:18:09 +0000 (0:00:05.939) 0:01:23.181 ****** 2026-02-03 05:18:12.907969 | orchestrator | changed: [testbed-manager] => { 2026-02-03 05:18:12.907977 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:18:12.907984 | orchestrator | } 2026-02-03 05:18:12.907991 | orchestrator | changed: [testbed-node-0] => { 2026-02-03 05:18:12.907998 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:18:12.908004 | orchestrator | } 2026-02-03 05:18:12.908011 | orchestrator | changed: [testbed-node-1] => { 2026-02-03 05:18:12.908017 | orchestrator |  "msg": "Notifying handlers" 
2026-02-03 05:18:12.908024 | orchestrator | } 2026-02-03 05:18:12.908032 | orchestrator | changed: [testbed-node-2] => { 2026-02-03 05:18:12.908044 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:18:12.908055 | orchestrator | } 2026-02-03 05:18:12.908065 | orchestrator | changed: [testbed-node-3] => { 2026-02-03 05:18:12.908076 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:18:12.908086 | orchestrator | } 2026-02-03 05:18:12.908097 | orchestrator | changed: [testbed-node-4] => { 2026-02-03 05:18:12.908109 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:18:12.908120 | orchestrator | } 2026-02-03 05:18:12.908130 | orchestrator | changed: [testbed-node-5] => { 2026-02-03 05:18:12.908141 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:18:12.908150 | orchestrator | } 2026-02-03 05:18:12.908159 | orchestrator | 2026-02-03 05:18:12.908166 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-03 05:18:12.908172 | orchestrator | Tuesday 03 February 2026 05:18:12 +0000 (0:00:02.668) 0:01:25.850 ****** 2026-02-03 05:18:12.908179 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-03 05:18:12.908205 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': 
'1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:18:12.908217 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:18:12.908228 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:18:12.908240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-03 05:18:12.908265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:18:12.908277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:18:12.908288 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:18:12.908299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-03 05:18:12.908309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:18:12.908320 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:18:12.908339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-03 05:20:23.852354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:20:23.852518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:20:23.852538 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:20:23.852550 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-03 05:20:23.852563 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:20:23.852653 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:20:23.852672 | 
orchestrator | skipping: [testbed-node-2] 2026-02-03 05:20:23.852688 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:20:23.852704 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-03 05:20:23.852723 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:20:23.852766 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:20:23.852801 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:20:23.852815 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-03 05:20:23.852831 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:20:23.852841 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:20:23.852851 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:20:23.852861 | orchestrator | 2026-02-03 05:20:23.852872 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-03 05:20:23.852884 | orchestrator | Tuesday 03 February 2026 05:18:15 +0000 (0:00:03.083) 0:01:28.933 ****** 
2026-02-03 05:20:23.852895 | orchestrator | 2026-02-03 05:20:23.852906 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-03 05:20:23.852917 | orchestrator | Tuesday 03 February 2026 05:18:15 +0000 (0:00:00.459) 0:01:29.393 ****** 2026-02-03 05:20:23.852928 | orchestrator | 2026-02-03 05:20:23.852939 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-03 05:20:23.852949 | orchestrator | Tuesday 03 February 2026 05:18:16 +0000 (0:00:00.437) 0:01:29.830 ****** 2026-02-03 05:20:23.852959 | orchestrator | 2026-02-03 05:20:23.852968 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-03 05:20:23.852977 | orchestrator | Tuesday 03 February 2026 05:18:16 +0000 (0:00:00.459) 0:01:30.290 ****** 2026-02-03 05:20:23.852987 | orchestrator | 2026-02-03 05:20:23.852996 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-03 05:20:23.853006 | orchestrator | Tuesday 03 February 2026 05:18:17 +0000 (0:00:00.442) 0:01:30.733 ****** 2026-02-03 05:20:23.853015 | orchestrator | 2026-02-03 05:20:23.853025 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-03 05:20:23.853035 | orchestrator | Tuesday 03 February 2026 05:18:17 +0000 (0:00:00.713) 0:01:31.446 ****** 2026-02-03 05:20:23.853044 | orchestrator | 2026-02-03 05:20:23.853054 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-03 05:20:23.853063 | orchestrator | Tuesday 03 February 2026 05:18:18 +0000 (0:00:00.567) 0:01:32.014 ****** 2026-02-03 05:20:23.853073 | orchestrator | 2026-02-03 05:20:23.853082 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-02-03 05:20:23.853092 | orchestrator | Tuesday 03 February 2026 05:18:19 +0000 (0:00:00.903) 0:01:32.917 
****** 2026-02-03 05:20:23.853101 | orchestrator | changed: [testbed-node-3] 2026-02-03 05:20:23.853111 | orchestrator | changed: [testbed-manager] 2026-02-03 05:20:23.853127 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:20:23.853137 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:20:23.853146 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:20:23.853156 | orchestrator | changed: [testbed-node-4] 2026-02-03 05:20:23.853165 | orchestrator | changed: [testbed-node-5] 2026-02-03 05:20:23.853174 | orchestrator | 2026-02-03 05:20:23.853184 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-02-03 05:20:23.853193 | orchestrator | Tuesday 03 February 2026 05:19:21 +0000 (0:01:02.297) 0:02:35.215 ****** 2026-02-03 05:20:23.853203 | orchestrator | changed: [testbed-manager] 2026-02-03 05:20:23.853212 | orchestrator | changed: [testbed-node-3] 2026-02-03 05:20:23.853222 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:20:23.853231 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:20:23.853241 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:20:23.853250 | orchestrator | changed: [testbed-node-4] 2026-02-03 05:20:23.853260 | orchestrator | changed: [testbed-node-5] 2026-02-03 05:20:23.853269 | orchestrator | 2026-02-03 05:20:23.853287 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-02-03 05:20:48.547518 | orchestrator | Tuesday 03 February 2026 05:20:23 +0000 (0:01:02.221) 0:03:37.436 ****** 2026-02-03 05:20:48.547696 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:20:48.547711 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:20:48.547723 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:20:48.547734 | orchestrator | ok: [testbed-manager] 2026-02-03 05:20:48.547745 | orchestrator | ok: [testbed-node-3] 2026-02-03 05:20:48.547756 | orchestrator | ok: [testbed-node-4] 2026-02-03 05:20:48.547767 | orchestrator | ok: 
[testbed-node-5] 2026-02-03 05:20:48.547778 | orchestrator | 2026-02-03 05:20:48.547790 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-02-03 05:20:48.547801 | orchestrator | Tuesday 03 February 2026 05:20:27 +0000 (0:00:03.858) 0:03:41.296 ****** 2026-02-03 05:20:48.547812 | orchestrator | changed: [testbed-manager] 2026-02-03 05:20:48.547823 | orchestrator | changed: [testbed-node-3] 2026-02-03 05:20:48.547834 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:20:48.547845 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:20:48.547856 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:20:48.547867 | orchestrator | changed: [testbed-node-4] 2026-02-03 05:20:48.547877 | orchestrator | changed: [testbed-node-5] 2026-02-03 05:20:48.547889 | orchestrator | 2026-02-03 05:20:48.547899 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 05:20:48.547945 | orchestrator | testbed-manager : ok=22  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-03 05:20:48.547971 | orchestrator | testbed-node-0 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-03 05:20:48.547983 | orchestrator | testbed-node-1 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-03 05:20:48.547993 | orchestrator | testbed-node-2 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-03 05:20:48.548004 | orchestrator | testbed-node-3 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-03 05:20:48.548015 | orchestrator | testbed-node-4 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-03 05:20:48.548026 | orchestrator | testbed-node-5 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-03 05:20:48.548037 | orchestrator | 2026-02-03 05:20:48.548048 | orchestrator | 2026-02-03 
05:20:48.548062 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 05:20:48.548102 | orchestrator | Tuesday 03 February 2026 05:20:47 +0000 (0:00:20.268) 0:04:01.564 ****** 2026-02-03 05:20:48.548115 | orchestrator | =============================================================================== 2026-02-03 05:20:48.548129 | orchestrator | common : Restart fluentd container ------------------------------------- 62.30s 2026-02-03 05:20:48.548141 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 62.22s 2026-02-03 05:20:48.548151 | orchestrator | common : Restart cron container ---------------------------------------- 20.27s 2026-02-03 05:20:48.548162 | orchestrator | common : Copying over config.json files for services -------------------- 7.45s 2026-02-03 05:20:48.548173 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 6.84s 2026-02-03 05:20:48.548184 | orchestrator | service-check-containers : common | Check containers -------------------- 5.94s 2026-02-03 05:20:48.548194 | orchestrator | common : Ensuring config directories exist ------------------------------ 5.05s 2026-02-03 05:20:48.548205 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.01s 2026-02-03 05:20:48.548216 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.99s 2026-02-03 05:20:48.548226 | orchestrator | common : Flush handlers ------------------------------------------------- 3.98s 2026-02-03 05:20:48.548237 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.98s 2026-02-03 05:20:48.548248 | orchestrator | common : Initializing toolbox container using normal user --------------- 3.86s 2026-02-03 05:20:48.548258 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.77s 2026-02-03 05:20:48.548269 | 
orchestrator | common : include_tasks -------------------------------------------------- 3.71s 2026-02-03 05:20:48.548280 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.40s 2026-02-03 05:20:48.548291 | orchestrator | common : Copying over kolla.target -------------------------------------- 3.38s 2026-02-03 05:20:48.548302 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.33s 2026-02-03 05:20:48.548313 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.33s 2026-02-03 05:20:48.548324 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.08s 2026-02-03 05:20:48.548335 | orchestrator | common : include_tasks -------------------------------------------------- 3.06s 2026-02-03 05:20:48.911287 | orchestrator | + osism apply -a upgrade loadbalancer 2026-02-03 05:20:51.197048 | orchestrator | 2026-02-03 05:20:51 | INFO  | Task 8ece4969-1497-4118-8845-8ff227d6dbdd (loadbalancer) was prepared for execution. 2026-02-03 05:20:51.197139 | orchestrator | 2026-02-03 05:20:51 | INFO  | It takes a moment until task 8ece4969-1497-4118-8845-8ff227d6dbdd (loadbalancer) has been started and output is visible here. 
2026-02-03 05:21:28.324382 | orchestrator | 2026-02-03 05:21:28.324531 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-03 05:21:28.324545 | orchestrator | 2026-02-03 05:21:28.324553 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-03 05:21:28.324560 | orchestrator | Tuesday 03 February 2026 05:20:57 +0000 (0:00:01.801) 0:00:01.801 ****** 2026-02-03 05:21:28.324568 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:21:28.324577 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:21:28.324584 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:21:28.324592 | orchestrator | 2026-02-03 05:21:28.324599 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-03 05:21:28.324607 | orchestrator | Tuesday 03 February 2026 05:20:59 +0000 (0:00:01.808) 0:00:03.610 ****** 2026-02-03 05:21:28.324615 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-02-03 05:21:28.324623 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-02-03 05:21:28.324630 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-02-03 05:21:28.324638 | orchestrator | 2026-02-03 05:21:28.324645 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-02-03 05:21:28.324670 | orchestrator | 2026-02-03 05:21:28.324678 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-03 05:21:28.324699 | orchestrator | Tuesday 03 February 2026 05:21:01 +0000 (0:00:01.774) 0:00:05.384 ****** 2026-02-03 05:21:28.324707 | orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:21:28.324715 | orchestrator | 2026-02-03 05:21:28.324722 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter 
containers] *** 2026-02-03 05:21:28.324729 | orchestrator | Tuesday 03 February 2026 05:21:04 +0000 (0:00:02.827) 0:00:08.212 ****** 2026-02-03 05:21:28.324737 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:21:28.324744 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:21:28.324751 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:21:28.324758 | orchestrator | 2026-02-03 05:21:28.324766 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] ********************* 2026-02-03 05:21:28.324773 | orchestrator | Tuesday 03 February 2026 05:21:06 +0000 (0:00:02.233) 0:00:10.446 ****** 2026-02-03 05:21:28.324780 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:21:28.324787 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:21:28.324795 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:21:28.324802 | orchestrator | 2026-02-03 05:21:28.324809 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-02-03 05:21:28.324816 | orchestrator | Tuesday 03 February 2026 05:21:08 +0000 (0:00:02.275) 0:00:12.721 ****** 2026-02-03 05:21:28.324823 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:21:28.324831 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:21:28.324838 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:21:28.324845 | orchestrator | 2026-02-03 05:21:28.324852 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-03 05:21:28.324859 | orchestrator | Tuesday 03 February 2026 05:21:11 +0000 (0:00:02.748) 0:00:15.470 ****** 2026-02-03 05:21:28.324867 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:21:28.324874 | orchestrator | 2026-02-03 05:21:28.324881 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-02-03 05:21:28.324889 | orchestrator | Tuesday 03 February 2026 05:21:13 +0000 (0:00:02.083) 0:00:17.553 ****** 2026-02-03 
05:21:28.324896 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:21:28.324903 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:21:28.324911 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:21:28.324918 | orchestrator | 2026-02-03 05:21:28.324925 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-02-03 05:21:28.324933 | orchestrator | Tuesday 03 February 2026 05:21:15 +0000 (0:00:01.845) 0:00:19.398 ****** 2026-02-03 05:21:28.324942 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-03 05:21:28.324951 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-03 05:21:28.324960 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-03 05:21:28.324968 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-03 05:21:28.324976 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-03 05:21:28.324985 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-03 05:21:28.324993 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-03 05:21:28.325003 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-03 05:21:28.325011 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-03 05:21:28.325020 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-03 05:21:28.325028 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-03 05:21:28.325044 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 
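The sysctl task above enables `ip_nonlocal_bind` for both address families so the load balancer containers can bind the virtual IP even on nodes that do not currently hold it, and raises the unix datagram queue length. The `KOLLA_UNSET` entries presumably mark keys the role should leave alone, so they are omitted below. A minimal sketch of the equivalent `sysctl.d`-style fragment (the path is illustrative and written under `/tmp` so it runs unprivileged; kolla-ansible applies these via the Ansible sysctl module, not this file):

```shell
# Render the three sysctl settings applied in the task output above
# as a sysctl.d-style fragment. Path is illustrative, not what the
# role actually writes.
conf=/tmp/90-loadbalancer-sysctl.conf
cat > "$conf" <<'EOF'
net.ipv6.ip_nonlocal_bind = 1
net.ipv4.ip_nonlocal_bind = 1
net.unix.max_dgram_qlen = 128
EOF
# 'sysctl -p "$conf"' would apply the fragment immediately (root required).
cat "$conf"
```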
2026-02-03 05:21:28.325052 | orchestrator | 2026-02-03 05:21:28.325061 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-03 05:21:28.325069 | orchestrator | Tuesday 03 February 2026 05:21:18 +0000 (0:00:03.425) 0:00:22.823 ****** 2026-02-03 05:21:28.325077 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-02-03 05:21:28.325086 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-02-03 05:21:28.325095 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-02-03 05:21:28.325103 | orchestrator | 2026-02-03 05:21:28.325112 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-03 05:21:28.325135 | orchestrator | Tuesday 03 February 2026 05:21:20 +0000 (0:00:02.072) 0:00:24.896 ****** 2026-02-03 05:21:28.325144 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-02-03 05:21:28.325153 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-02-03 05:21:28.325161 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-02-03 05:21:28.325170 | orchestrator | 2026-02-03 05:21:28.325178 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-03 05:21:28.325187 | orchestrator | Tuesday 03 February 2026 05:21:23 +0000 (0:00:02.521) 0:00:27.418 ****** 2026-02-03 05:21:28.325196 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-02-03 05:21:28.325205 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:21:28.325213 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-02-03 05:21:28.325221 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:21:28.325230 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-02-03 05:21:28.325238 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:21:28.325247 | orchestrator | 2026-02-03 05:21:28.325255 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 
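The two `module-load` tasks above load the `ip_vs` kernel module immediately and persist it across reboots. The persistence step amounts to a one-module-per-line file under `/etc/modules-load.d/`, which `systemd-modules-load` reads at boot. A sketch (writing to `/tmp` so it runs unprivileged; the real file name under `/etc/modules-load.d/` is an assumption):

```shell
# Persist the ip_vs module the systemd way: one module name per line
# in a modules-load.d conf file. Path under /tmp is illustrative.
f=/tmp/ip_vs-modules-load.conf
echo ip_vs > "$f"
# 'modprobe ip_vs' loads it now; the file makes it load on every boot
# once placed in /etc/modules-load.d/.
cat "$f"
```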
2026-02-03 05:21:28.325263 | orchestrator | Tuesday 03 February 2026 05:21:25 +0000 (0:00:02.187) 0:00:29.606 ****** 2026-02-03 05:21:28.325278 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-03 05:21:28.325291 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-03 05:21:28.325311 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-03 05:21:28.325329 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 05:21:28.325346 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 05:21:28.325359 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 05:21:40.111803 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-03 05:21:40.111911 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-03 05:21:40.111929 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-03 05:21:40.111942 | orchestrator | 2026-02-03 05:21:40.111955 | 
orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-03 05:21:40.111968 | orchestrator | Tuesday 03 February 2026 05:21:28 +0000 (0:00:02.890) 0:00:32.496 ****** 2026-02-03 05:21:40.111979 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:21:40.111992 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:21:40.112003 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:21:40.112040 | orchestrator | 2026-02-03 05:21:40.112053 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-03 05:21:40.112063 | orchestrator | Tuesday 03 February 2026 05:21:30 +0000 (0:00:02.098) 0:00:34.594 ****** 2026-02-03 05:21:40.112074 | orchestrator | ok: [testbed-node-0] => (item=users) 2026-02-03 05:21:40.112086 | orchestrator | ok: [testbed-node-1] => (item=users) 2026-02-03 05:21:40.112097 | orchestrator | ok: [testbed-node-2] => (item=users) 2026-02-03 05:21:40.112108 | orchestrator | ok: [testbed-node-0] => (item=rules) 2026-02-03 05:21:40.112119 | orchestrator | ok: [testbed-node-1] => (item=rules) 2026-02-03 05:21:40.112129 | orchestrator | ok: [testbed-node-2] => (item=rules) 2026-02-03 05:21:40.112140 | orchestrator | 2026-02-03 05:21:40.112151 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-02-03 05:21:40.112162 | orchestrator | Tuesday 03 February 2026 05:21:33 +0000 (0:00:02.983) 0:00:37.578 ****** 2026-02-03 05:21:40.112172 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:21:40.112183 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:21:40.112194 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:21:40.112205 | orchestrator | 2026-02-03 05:21:40.112216 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-02-03 05:21:40.112227 | orchestrator | Tuesday 03 February 2026 05:21:35 +0000 (0:00:02.422) 0:00:40.001 ****** 2026-02-03 05:21:40.112237 | orchestrator | ok: 
[testbed-node-0] 2026-02-03 05:21:40.112248 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:21:40.112259 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:21:40.112272 | orchestrator | 2026-02-03 05:21:40.112286 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-02-03 05:21:40.112298 | orchestrator | Tuesday 03 February 2026 05:21:38 +0000 (0:00:02.467) 0:00:42.468 ****** 2026-02-03 05:21:40.112313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-03 05:21:40.112365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 05:21:40.112386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 05:21:40.112401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__aefa0b9da25ac8f40e97097b85a15779f2b3dd1f', '__omit_place_holder__aefa0b9da25ac8f40e97097b85a15779f2b3dd1f'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-03 05:21:40.112423 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:21:40.112437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-03 05:21:40.112479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 05:21:40.112501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 05:21:40.112516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__aefa0b9da25ac8f40e97097b85a15779f2b3dd1f', '__omit_place_holder__aefa0b9da25ac8f40e97097b85a15779f2b3dd1f'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-03 05:21:40.112529 | orchestrator | skipping: [testbed-node-1] 2026-02-03 
05:21:40.112556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-03 05:21:44.501167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 05:21:44.501293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 05:21:44.501310 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__aefa0b9da25ac8f40e97097b85a15779f2b3dd1f', '__omit_place_holder__aefa0b9da25ac8f40e97097b85a15779f2b3dd1f'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-03 05:21:44.501323 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:21:44.501336 | orchestrator | 2026-02-03 05:21:44.501348 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-02-03 05:21:44.501361 | orchestrator | Tuesday 03 February 2026 05:21:40 +0000 (0:00:01.815) 0:00:44.284 ****** 2026-02-03 05:21:44.501373 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-03 05:21:44.501385 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': 
True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-03 05:21:44.501409 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-03 05:21:44.501478 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 05:21:44.501492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 05:21:44.501504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__aefa0b9da25ac8f40e97097b85a15779f2b3dd1f', '__omit_place_holder__aefa0b9da25ac8f40e97097b85a15779f2b3dd1f'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-03 05:21:44.501521 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 05:21:44.501542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 05:21:44.501562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__aefa0b9da25ac8f40e97097b85a15779f2b3dd1f', '__omit_place_holder__aefa0b9da25ac8f40e97097b85a15779f2b3dd1f'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-03 05:21:44.501637 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 05:21:59.567149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 05:21:59.567285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__aefa0b9da25ac8f40e97097b85a15779f2b3dd1f', '__omit_place_holder__aefa0b9da25ac8f40e97097b85a15779f2b3dd1f'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-03 05:21:59.567303 | orchestrator | 2026-02-03 05:21:59.567340 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-02-03 05:21:59.567363 | orchestrator | Tuesday 03 February 2026 05:21:44 +0000 (0:00:04.390) 0:00:48.675 ****** 2026-02-03 05:21:59.567383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-03 05:21:59.567405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-03 05:21:59.567472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-03 05:21:59.567530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 05:21:59.567579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 05:21:59.567603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 05:21:59.567621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-03 05:21:59.567640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-03 05:21:59.567654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-03 05:21:59.567667 | orchestrator | 2026-02-03 05:21:59.567680 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-02-03 05:21:59.567693 | orchestrator | Tuesday 03 February 2026 05:21:49 +0000 (0:00:05.290) 0:00:53.965 ****** 2026-02-03 05:21:59.567718 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-03 05:21:59.567731 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-03 05:21:59.567742 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-03 
05:21:59.567753 | orchestrator | 2026-02-03 05:21:59.567764 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-02-03 05:21:59.567781 | orchestrator | Tuesday 03 February 2026 05:21:52 +0000 (0:00:02.927) 0:00:56.892 ****** 2026-02-03 05:21:59.567792 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-03 05:21:59.567803 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-03 05:21:59.567814 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-03 05:21:59.567825 | orchestrator | 2026-02-03 05:21:59.567835 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-02-03 05:21:59.567846 | orchestrator | Tuesday 03 February 2026 05:21:57 +0000 (0:00:04.755) 0:01:01.648 ****** 2026-02-03 05:21:59.567857 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:21:59.567869 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:21:59.567887 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:22:22.283139 | orchestrator | 2026-02-03 05:22:22.283277 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-02-03 05:22:22.283305 | orchestrator | Tuesday 03 February 2026 05:21:59 +0000 (0:00:02.088) 0:01:03.737 ****** 2026-02-03 05:22:22.283325 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-03 05:22:22.283345 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-03 05:22:22.283364 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-03 05:22:22.283385 | 
orchestrator | 2026-02-03 05:22:22.283462 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-02-03 05:22:22.283481 | orchestrator | Tuesday 03 February 2026 05:22:02 +0000 (0:00:03.289) 0:01:07.027 ****** 2026-02-03 05:22:22.283501 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-03 05:22:22.283522 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-03 05:22:22.283541 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-03 05:22:22.283561 | orchestrator | 2026-02-03 05:22:22.283580 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-03 05:22:22.283600 | orchestrator | Tuesday 03 February 2026 05:22:05 +0000 (0:00:02.897) 0:01:09.924 ****** 2026-02-03 05:22:22.283619 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:22:22.283637 | orchestrator | 2026-02-03 05:22:22.283655 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-02-03 05:22:22.283673 | orchestrator | Tuesday 03 February 2026 05:22:07 +0000 (0:00:02.077) 0:01:12.002 ****** 2026-02-03 05:22:22.283692 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem) 2026-02-03 05:22:22.283711 | orchestrator | ok: [testbed-node-1] => (item=haproxy.pem) 2026-02-03 05:22:22.283730 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem) 2026-02-03 05:22:22.283750 | orchestrator | 2026-02-03 05:22:22.283770 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-02-03 05:22:22.283790 | orchestrator | Tuesday 03 February 2026 05:22:11 +0000 (0:00:03.759) 0:01:15.761 ****** 2026-02-03 05:22:22.283848 | 
orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-03 05:22:22.283870 | orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem) 2026-02-03 05:22:22.283890 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem) 2026-02-03 05:22:22.283910 | orchestrator | 2026-02-03 05:22:22.283930 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-02-03 05:22:22.283951 | orchestrator | Tuesday 03 February 2026 05:22:14 +0000 (0:00:02.743) 0:01:18.504 ****** 2026-02-03 05:22:22.283969 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:22:22.284013 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:22:22.284048 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:22:22.284068 | orchestrator | 2026-02-03 05:22:22.284088 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-02-03 05:22:22.284107 | orchestrator | Tuesday 03 February 2026 05:22:15 +0000 (0:00:01.413) 0:01:19.918 ****** 2026-02-03 05:22:22.284127 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:22:22.284147 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:22:22.284166 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:22:22.284184 | orchestrator | 2026-02-03 05:22:22.284203 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-03 05:22:22.284220 | orchestrator | Tuesday 03 February 2026 05:22:18 +0000 (0:00:02.299) 0:01:22.217 ****** 2026-02-03 05:22:22.284244 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-03 05:22:22.284331 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-03 05:22:22.284384 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-03 05:22:22.284433 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 05:22:22.284470 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 05:22:22.284507 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 05:22:22.284528 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-03 05:22:22.284555 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-03 05:22:22.284586 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-03 05:22:26.406885 | orchestrator | 2026-02-03 05:22:26.407004 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-03 05:22:26.407022 | orchestrator | Tuesday 03 February 2026 05:22:22 +0000 (0:00:04.234) 0:01:26.452 ****** 2026-02-03 05:22:26.407038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-03 05:22:26.407054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 05:22:26.407091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 05:22:26.407105 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:22:26.407119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-03 05:22:26.407131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 05:22:26.407143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 05:22:26.407155 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:22:26.407186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-03 05:22:26.407199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 05:22:26.407218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 05:22:26.407229 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:22:26.407240 | orchestrator | 2026-02-03 05:22:26.407252 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 
2026-02-03 05:22:26.407263 | orchestrator | Tuesday 03 February 2026 05:22:24 +0000 (0:00:01.822) 0:01:28.275 ****** 2026-02-03 05:22:26.407274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-03 05:22:26.407286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 05:22:26.407313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 05:22:26.407325 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:22:26.407347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-03 05:22:39.328848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 05:22:39.328968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 05:22:39.328986 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:22:39.329000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-03 05:22:39.329012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 05:22:39.329024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 05:22:39.329053 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:22:39.329065 | orchestrator | 2026-02-03 05:22:39.329077 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-03 05:22:39.329089 | orchestrator | Tuesday 03 February 2026 05:22:26 +0000 (0:00:02.304) 0:01:30.580 ****** 2026-02-03 05:22:39.329101 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-03 05:22:39.329113 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-03 05:22:39.329124 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-03 05:22:39.329135 | orchestrator | 2026-02-03 05:22:39.329146 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-03 05:22:39.329180 | orchestrator | Tuesday 03 February 2026 05:22:29 +0000 (0:00:02.658) 0:01:33.239 ****** 2026-02-03 05:22:39.329192 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-03 05:22:39.329203 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-03 05:22:39.329214 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-03 05:22:39.329225 | orchestrator | 2026-02-03 05:22:39.329251 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-03 05:22:39.329263 | orchestrator | Tuesday 03 February 2026 05:22:31 +0000 (0:00:02.926) 0:01:36.165 ****** 2026-02-03 05:22:39.329274 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-03 05:22:39.329286 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-03 05:22:39.329297 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-03 05:22:39.329307 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-03 05:22:39.329318 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:22:39.329329 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-03 05:22:39.329340 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:22:39.329351 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-03 05:22:39.329434 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:22:39.329451 | orchestrator | 2026-02-03 05:22:39.329464 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-02-03 05:22:39.329477 | orchestrator | Tuesday 03 February 2026 05:22:34 +0000 (0:00:02.954) 0:01:39.119 ****** 2026-02-03 05:22:39.329491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-03 05:22:39.329506 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-03 05:22:39.329520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-03 05:22:39.329602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 
'timeout': '30'}}}) 2026-02-03 05:22:39.329629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 05:22:43.234768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 05:22:43.234871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-03 05:22:43.234887 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-03 05:22:43.234900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-03 05:22:43.234912 | orchestrator |
2026-02-03 05:22:43.234926 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] ***
2026-02-03 05:22:43.234938 | orchestrator | Tuesday 03 February 2026 05:22:39 +0000 (0:00:04.384) 0:01:43.504 ******
2026-02-03 05:22:43.234950 | orchestrator | changed: [testbed-node-0] => {
2026-02-03 05:22:43.234989 | orchestrator |  "msg": "Notifying handlers"
2026-02-03 05:22:43.235001 | orchestrator | }
2026-02-03 05:22:43.235013 | orchestrator | changed: [testbed-node-1] => {
2026-02-03 05:22:43.235024 | orchestrator |  "msg": "Notifying handlers"
2026-02-03 05:22:43.235034 | orchestrator | }
2026-02-03 05:22:43.235045 | orchestrator | changed: [testbed-node-2] => {
2026-02-03 05:22:43.235056 | orchestrator |  "msg": "Notifying handlers"
2026-02-03 05:22:43.235067 | orchestrator | }
2026-02-03 05:22:43.235078 | orchestrator |
2026-02-03 05:22:43.235089 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-03 05:22:43.235115 | orchestrator | Tuesday 03 February 2026 05:22:40 +0000 (0:00:01.448) 0:01:44.953 ******
2026-02-03 05:22:43.235128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-03 05:22:43.235158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-03 05:22:43.235172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes':
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 05:22:43.235183 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:22:43.235195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-03 05:22:43.235207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 05:22:43.235227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 05:22:43.235239 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:22:43.235257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-03 05:22:43.235270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 05:22:43.235291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-03 05:22:49.172418 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:22:49.172592 | orchestrator |
2026-02-03 05:22:49.172615 | orchestrator | TASK [include_role : aodh] *****************************************************
2026-02-03 05:22:49.172629 | orchestrator | Tuesday 03 February 2026 05:22:43 +0000 (0:00:02.450) 0:01:47.404 ******
2026-02-03 05:22:49.172640 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 05:22:49.172651 | orchestrator |
2026-02-03 05:22:49.172663 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2026-02-03 05:22:49.172674 | orchestrator | Tuesday 03 February 2026 05:22:45 +0000 (0:00:02.096) 0:01:49.500 ******
2026-02-03 05:22:49.172690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external':
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:22:49.172735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-03 05:22:49.172766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-03 05:22:49.172779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-03 05:22:49.172813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:22:49.172826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-03 05:22:49.172838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-03 05:22:49.172861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-03 05:22:49.172880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:22:49.172894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-03 05:22:49.172915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-03 05:22:50.989980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-03 05:22:50.990094 | orchestrator |
2026-02-03 05:22:50.990103 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2026-02-03 05:22:50.990111 | orchestrator | Tuesday 03 February 2026 05:22:50 +0000 (0:00:05.009) 0:01:54.510 ******
2026-02-03 05:22:50.990137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-02-03 05:22:50.990146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'],
'timeout': '30'}}})  2026-02-03 05:22:50.990163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-03 05:22:50.990169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-03 05:22:50.990174 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:22:50.990194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:22:50.990206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-03 05:22:50.990212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-03 05:22:50.990217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-03 05:22:50.990222 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:22:50.990231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:22:50.990237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-03 05:22:50.990246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-03 05:23:06.896116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-03 05:23:06.896230 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:23:06.896248 | orchestrator |
2026-02-03 05:23:06.896261 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2026-02-03 05:23:06.896274 | orchestrator | Tuesday 03 February 2026 05:22:52 +0000 (0:00:01.861) 0:01:56.372 ******
2026-02-03 05:23:06.896286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-02-03 05:23:06.896300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-02-03 05:23:06.896313 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:23:06.896324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-02-03 05:23:06.896402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-02-03 05:23:06.896414 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:23:06.896437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-02-03 05:23:06.896449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-02-03 05:23:06.896460 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:23:06.896471 | orchestrator |
2026-02-03 05:23:06.896483 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2026-02-03 05:23:06.896494 | orchestrator | Tuesday 03 February 2026 05:22:54 +0000 (0:00:02.427) 0:01:58.800 ******
2026-02-03 05:23:06.896505 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:23:06.896517 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:23:06.896528 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:23:06.896539 | orchestrator |
2026-02-03 05:23:06.896550 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2026-02-03 05:23:06.896561 | orchestrator | Tuesday 03 February 2026 05:22:57 +0000 (0:00:02.454) 0:02:01.255 ******
2026-02-03 05:23:06.896571 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:23:06.896582 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:23:06.896593 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:23:06.896604 | orchestrator |
2026-02-03 05:23:06.896615 | orchestrator | TASK [include_role : barbican] *************************************************
2026-02-03 05:23:06.896628 | orchestrator | Tuesday 03 February 2026 05:23:00 +0000 (0:00:03.026) 0:02:04.281 ******
2026-02-03 05:23:06.896664 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 05:23:06.896677 | orchestrator |
2026-02-03 05:23:06.896690 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2026-02-03 05:23:06.896702 | orchestrator | Tuesday 03 February 2026 05:23:01 +0000 (0:00:01.772) 0:02:06.054 ******
2026-02-03 05:23:06.896739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode':
'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:23:06.896757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-03 05:23:06.896773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-03 05:23:06.896793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:23:06.896806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-03 05:23:06.896827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-03 05:23:06.896848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:23:08.665406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-03 05:23:08.665512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-03 05:23:08.665528 | orchestrator | 2026-02-03 05:23:08.665541 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-03 05:23:08.665554 | orchestrator | Tuesday 03 February 2026 05:23:06 +0000 (0:00:05.012) 0:02:11.066 ****** 2026-02-03 05:23:08.665569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:23:08.665608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-03 05:23:08.665621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-03 05:23:08.665700 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:23:08.665773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:23:08.665809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-03 05:23:08.665846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-03 05:23:08.665868 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:23:08.665892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:23:08.665914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 
5672'], 'timeout': '30'}}})  2026-02-03 05:23:08.665939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-03 05:23:26.240671 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:23:26.240771 | orchestrator | 2026-02-03 05:23:26.240782 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-03 05:23:26.240790 | orchestrator | Tuesday 03 February 2026 05:23:08 +0000 (0:00:01.777) 0:02:12.844 ****** 2026-02-03 05:23:26.240798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:23:26.240821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:23:26.240897 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:23:26.240906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-03 
05:23:26.240914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:23:26.240922 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:23:26.240929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:23:26.240937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:23:26.240944 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:23:26.240951 | orchestrator | 2026-02-03 05:23:26.240959 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-03 05:23:26.240967 | orchestrator | Tuesday 03 February 2026 05:23:10 +0000 (0:00:01.928) 0:02:14.773 ****** 2026-02-03 05:23:26.240974 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:23:26.240982 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:23:26.240990 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:23:26.240997 | orchestrator | 2026-02-03 05:23:26.241004 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-03 05:23:26.241011 | orchestrator | Tuesday 03 February 2026 05:23:13 +0000 (0:00:02.459) 0:02:17.233 ****** 2026-02-03 05:23:26.241018 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:23:26.241026 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:23:26.241033 | orchestrator | ok: 
[testbed-node-2] 2026-02-03 05:23:26.241040 | orchestrator | 2026-02-03 05:23:26.241047 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-03 05:23:26.241054 | orchestrator | Tuesday 03 February 2026 05:23:16 +0000 (0:00:03.160) 0:02:20.394 ****** 2026-02-03 05:23:26.241061 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:23:26.241069 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:23:26.241076 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:23:26.241083 | orchestrator | 2026-02-03 05:23:26.241090 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-03 05:23:26.241098 | orchestrator | Tuesday 03 February 2026 05:23:17 +0000 (0:00:01.520) 0:02:21.914 ****** 2026-02-03 05:23:26.241105 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:23:26.241112 | orchestrator | 2026-02-03 05:23:26.241119 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-03 05:23:26.241126 | orchestrator | Tuesday 03 February 2026 05:23:19 +0000 (0:00:01.809) 0:02:23.724 ****** 2026-02-03 05:23:26.241135 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-03 05:23:26.241172 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-03 05:23:26.241182 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-03 05:23:26.241190 | orchestrator | 2026-02-03 05:23:26.241198 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-03 
05:23:26.241207 | orchestrator | Tuesday 03 February 2026 05:23:23 +0000 (0:00:03.825) 0:02:27.550 ****** 2026-02-03 05:23:26.241216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-03 05:23:26.241225 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:23:26.241234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-03 05:23:26.241242 | orchestrator | skipping: [testbed-node-1] 2026-02-03 
05:23:26.241256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-03 05:23:39.949619 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:23:39.949694 | orchestrator | 2026-02-03 05:23:39.949700 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-03 05:23:39.949706 | orchestrator | Tuesday 03 February 2026 05:23:26 +0000 (0:00:02.866) 0:02:30.416 ****** 2026-02-03 05:23:39.949725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-03 05:23:39.949732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 
192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-03 05:23:39.949738 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:23:39.949742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-03 05:23:39.949746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-03 05:23:39.949750 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:23:39.949754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-03 05:23:39.949758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 
2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-03 05:23:39.949762 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:23:39.949766 | orchestrator | 2026-02-03 05:23:39.949771 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-03 05:23:39.949775 | orchestrator | Tuesday 03 February 2026 05:23:29 +0000 (0:00:03.076) 0:02:33.492 ****** 2026-02-03 05:23:39.949793 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:23:39.949797 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:23:39.949801 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:23:39.949805 | orchestrator | 2026-02-03 05:23:39.949809 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-03 05:23:39.949813 | orchestrator | Tuesday 03 February 2026 05:23:30 +0000 (0:00:01.626) 0:02:35.119 ****** 2026-02-03 05:23:39.949817 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:23:39.949821 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:23:39.949825 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:23:39.949829 | orchestrator | 2026-02-03 05:23:39.949833 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-03 05:23:39.949837 | orchestrator | Tuesday 03 February 2026 05:23:33 +0000 (0:00:02.626) 0:02:37.745 ****** 2026-02-03 05:23:39.949841 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:23:39.949845 | orchestrator | 2026-02-03 05:23:39.949849 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-03 05:23:39.949853 | orchestrator | Tuesday 03 February 2026 05:23:35 +0000 (0:00:01.958) 0:02:39.703 ****** 2026-02-03 05:23:39.949881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:23:39.949894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 05:23:39.949902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-03 05:23:39.949910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:23:39.949924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-03 05:23:39.949940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 05:23:42.118981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-03 05:23:42.119086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-03 05:23:42.119105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:23:42.119146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 05:23:42.119159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-03 05:23:42.119206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-03 05:23:42.119220 | orchestrator | 2026-02-03 05:23:42.119234 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-02-03 05:23:42.119246 | orchestrator | Tuesday 03 February 2026 05:23:41 +0000 
(0:00:05.633) 0:02:45.336 ****** 2026-02-03 05:23:42.119260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:23:42.119282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 05:23:42.119341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-03 05:23:42.119354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-03 05:23:42.119365 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:23:42.119395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:23:54.273165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 05:23:54.273405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-03 05:23:54.273429 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-03 05:23:54.273446 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:23:54.273465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:23:54.273483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 05:23:54.273539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-03 05:23:54.273590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2026-02-03 05:23:54.273606 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:23:54.273621 | orchestrator | 2026-02-03 05:23:54.273637 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-02-03 05:23:54.273653 | orchestrator | Tuesday 03 February 2026 05:23:43 +0000 (0:00:02.123) 0:02:47.460 ****** 2026-02-03 05:23:54.273669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:23:54.273685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:23:54.273702 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:23:54.273716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:23:54.273750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:23:54.273765 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:23:54.273780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  
2026-02-03 05:23:54.273795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:23:54.273810 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:23:54.273825 | orchestrator | 2026-02-03 05:23:54.273840 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-02-03 05:23:54.273877 | orchestrator | Tuesday 03 February 2026 05:23:45 +0000 (0:00:02.209) 0:02:49.670 ****** 2026-02-03 05:23:54.273892 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:23:54.273908 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:23:54.273922 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:23:54.273937 | orchestrator | 2026-02-03 05:23:54.273951 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-02-03 05:23:54.273965 | orchestrator | Tuesday 03 February 2026 05:23:47 +0000 (0:00:02.435) 0:02:52.105 ****** 2026-02-03 05:23:54.273980 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:23:54.273994 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:23:54.274008 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:23:54.274094 | orchestrator | 2026-02-03 05:23:54.274110 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-02-03 05:23:54.274123 | orchestrator | Tuesday 03 February 2026 05:23:51 +0000 (0:00:03.121) 0:02:55.226 ****** 2026-02-03 05:23:54.274144 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:23:54.274157 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:23:54.274170 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:23:54.274183 | orchestrator | 2026-02-03 05:23:54.274196 | orchestrator | TASK [include_role : cyborg] 
*************************************************** 2026-02-03 05:23:54.274209 | orchestrator | Tuesday 03 February 2026 05:23:52 +0000 (0:00:01.671) 0:02:56.898 ****** 2026-02-03 05:23:54.274222 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:23:54.274235 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:23:54.274258 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:24:00.175633 | orchestrator | 2026-02-03 05:24:00.175753 | orchestrator | TASK [include_role : designate] ************************************************ 2026-02-03 05:24:00.175771 | orchestrator | Tuesday 03 February 2026 05:23:54 +0000 (0:00:01.551) 0:02:58.450 ****** 2026-02-03 05:24:00.175783 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:24:00.175794 | orchestrator | 2026-02-03 05:24:00.175805 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-02-03 05:24:00.175816 | orchestrator | Tuesday 03 February 2026 05:23:56 +0000 (0:00:02.204) 0:03:00.654 ****** 2026-02-03 05:24:00.175833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:24:00.175851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-03 05:24:00.175864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-03 05:24:00.175894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-03 05:24:00.175931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-03 05:24:00.175964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-03 05:24:00.175977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-03 05:24:00.175989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:24:00.176002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-03 05:24:00.176019 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-03 05:24:00.176043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-03 05:24:00.176063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-03 05:24:02.316326 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-03 05:24:02.316427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-03 05:24:02.316445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:24:02.316499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-03 05:24:02.316514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-03 05:24:02.316546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-03 05:24:02.316559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-03 05:24:02.316571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-03 05:24:02.316582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-03 05:24:02.316604 | orchestrator | 2026-02-03 05:24:02.316617 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-02-03 05:24:02.316629 | orchestrator | Tuesday 03 February 2026 05:24:01 +0000 (0:00:05.146) 0:03:05.801 ****** 2026-02-03 05:24:02.316646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:24:02.316659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-03 05:24:02.316680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-03 05:24:03.659435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-03 05:24:03.659541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-03 05:24:03.659559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-03 05:24:03.659610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-03 05:24:03.659624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:24:03.659641 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:24:03.659674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-03 05:24:03.659688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-03 05:24:03.659700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-03 05:24:03.659719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-03 05:24:03.659735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-03 05:24:03.659747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-03 05:24:03.659758 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:24:03.659778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:24:20.054139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-03 05:24:20.054331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-03 05:24:20.054366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-03 05:24:20.054379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-03 05:24:20.054391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-03 05:24:20.054403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-03 05:24:20.054415 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:24:20.054429 | orchestrator | 2026-02-03 05:24:20.054441 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-03 05:24:20.054454 | orchestrator | Tuesday 03 February 2026 05:24:03 +0000 (0:00:02.040) 0:03:07.842 ****** 2026-02-03 05:24:20.054483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:24:20.054498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:24:20.054521 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:24:20.054533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:24:20.054544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:24:20.054556 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:24:20.054567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:24:20.054579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:24:20.054590 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:24:20.054601 | orchestrator | 2026-02-03 05:24:20.054612 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-02-03 05:24:20.054625 | orchestrator | Tuesday 03 February 2026 05:24:05 +0000 (0:00:02.261) 0:03:10.104 ****** 2026-02-03 05:24:20.054639 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:24:20.054653 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:24:20.054666 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:24:20.054679 | orchestrator | 2026-02-03 05:24:20.054692 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-02-03 05:24:20.054710 | orchestrator | Tuesday 03 February 2026 05:24:08 +0000 (0:00:02.425) 0:03:12.530 ****** 2026-02-03 05:24:20.054723 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:24:20.054737 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:24:20.054749 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:24:20.054763 | orchestrator | 2026-02-03 05:24:20.054776 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-02-03 05:24:20.054788 | orchestrator | Tuesday 03 February 2026 05:24:11 +0000 (0:00:03.072) 0:03:15.602 ****** 2026-02-03 05:24:20.054802 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:24:20.054816 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:24:20.054829 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:24:20.054842 | orchestrator | 2026-02-03 05:24:20.054855 | orchestrator | TASK 
[include_role : glance] *************************************************** 2026-02-03 05:24:20.054868 | orchestrator | Tuesday 03 February 2026 05:24:13 +0000 (0:00:01.603) 0:03:17.206 ****** 2026-02-03 05:24:20.054881 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:24:20.054893 | orchestrator | 2026-02-03 05:24:20.054906 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-02-03 05:24:20.054919 | orchestrator | Tuesday 03 February 2026 05:24:14 +0000 (0:00:01.964) 0:03:19.171 ****** 2026-02-03 05:24:20.054945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-03 05:24:21.231093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-03 05:24:21.231177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-03 05:24:21.231277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-03 05:24:21.231292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-03 
05:24:21.231315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-03 
05:24:24.994685 | orchestrator | 2026-02-03 05:24:25.291520 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-03 05:24:25.291593 | orchestrator | Tuesday 03 February 2026 05:24:21 +0000 (0:00:06.244) 0:03:25.415 ****** 2026-02-03 05:24:25.291630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-03 05:24:25.291672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-03 05:24:25.291683 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:24:25.291724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-03 05:24:25.291741 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-03 05:24:25.291750 | orchestrator | 
skipping: [testbed-node-1] 2026-02-03 05:24:25.291770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-03 05:24:45.538896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-03 05:24:45.539033 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:24:45.539057 | orchestrator | 2026-02-03 05:24:45.539072 | orchestrator | TASK [haproxy-config 
: Configuring firewall for glance] ************************ 2026-02-03 05:24:45.539085 | orchestrator | Tuesday 03 February 2026 05:24:26 +0000 (0:00:05.063) 0:03:30.480 ****** 2026-02-03 05:24:45.539093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-03 05:24:45.539114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-03 05:24:45.539123 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:24:45.539130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-03 05:24:45.539155 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-03 05:24:45.539170 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:24:45.539177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-03 05:24:45.539185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-03 05:24:45.539192 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:24:45.539198 | orchestrator | 2026-02-03 05:24:45.539205 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 
2026-02-03 05:24:45.539212 | orchestrator | Tuesday 03 February 2026 05:24:31 +0000 (0:00:05.061) 0:03:35.541 ****** 2026-02-03 05:24:45.539273 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:24:45.539282 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:24:45.539288 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:24:45.539295 | orchestrator | 2026-02-03 05:24:45.539302 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-03 05:24:45.539309 | orchestrator | Tuesday 03 February 2026 05:24:33 +0000 (0:00:02.388) 0:03:37.930 ****** 2026-02-03 05:24:45.539315 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:24:45.539322 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:24:45.539329 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:24:45.539335 | orchestrator | 2026-02-03 05:24:45.539342 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-03 05:24:45.539348 | orchestrator | Tuesday 03 February 2026 05:24:36 +0000 (0:00:03.138) 0:03:41.068 ****** 2026-02-03 05:24:45.539355 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:24:45.539362 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:24:45.539368 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:24:45.539375 | orchestrator | 2026-02-03 05:24:45.539381 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-03 05:24:45.539388 | orchestrator | Tuesday 03 February 2026 05:24:38 +0000 (0:00:01.711) 0:03:42.780 ****** 2026-02-03 05:24:45.539395 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:24:45.539401 | orchestrator | 2026-02-03 05:24:45.539408 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-03 05:24:45.539415 | orchestrator | Tuesday 03 February 2026 05:24:40 +0000 (0:00:01.835) 0:03:44.616 ****** 2026-02-03 
05:24:45.539428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:24:45.539448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:25:04.004799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:25:04.004889 | orchestrator | 2026-02-03 05:25:04.004900 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-03 05:25:04.004908 | orchestrator | Tuesday 03 February 2026 05:24:45 +0000 (0:00:05.104) 0:03:49.721 ****** 2026-02-03 05:25:04.004917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:25:04.004924 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:25:04.004932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:25:04.004938 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:25:04.004959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:25:04.004985 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:25:04.004991 | orchestrator | 2026-02-03 05:25:04.004998 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-03 05:25:04.005004 | orchestrator | Tuesday 03 February 2026 05:24:47 +0000 (0:00:01.979) 0:03:51.701 ****** 2026-02-03 05:25:04.005012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:25:04.005021 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:25:04.005028 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:25:04.005052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:25:04.005059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:25:04.005065 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:25:04.005071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:25:04.005078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:25:04.005084 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:25:04.005091 | orchestrator | 2026-02-03 05:25:04.005097 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-03 05:25:04.005103 | orchestrator | Tuesday 03 February 2026 05:24:49 +0000 (0:00:01.506) 0:03:53.207 ****** 2026-02-03 05:25:04.005107 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:25:04.005111 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:25:04.005115 | 
orchestrator | ok: [testbed-node-2] 2026-02-03 05:25:04.005119 | orchestrator | 2026-02-03 05:25:04.005123 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-03 05:25:04.005126 | orchestrator | Tuesday 03 February 2026 05:24:51 +0000 (0:00:02.395) 0:03:55.603 ****** 2026-02-03 05:25:04.005131 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:25:04.005135 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:25:04.005138 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:25:04.005142 | orchestrator | 2026-02-03 05:25:04.005146 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-03 05:25:04.005150 | orchestrator | Tuesday 03 February 2026 05:24:54 +0000 (0:00:03.226) 0:03:58.830 ****** 2026-02-03 05:25:04.005153 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:25:04.005157 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:25:04.005161 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:25:04.005170 | orchestrator | 2026-02-03 05:25:04.005174 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-03 05:25:04.005178 | orchestrator | Tuesday 03 February 2026 05:24:56 +0000 (0:00:01.591) 0:04:00.421 ****** 2026-02-03 05:25:04.005181 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:25:04.005185 | orchestrator | 2026-02-03 05:25:04.005189 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-03 05:25:04.005193 | orchestrator | Tuesday 03 February 2026 05:24:58 +0000 (0:00:02.158) 0:04:02.580 ****** 2026-02-03 05:25:04.005245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-03 
05:25:05.874084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-03 05:25:05.874315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-03 05:25:05.874355 | orchestrator | 2026-02-03 05:25:05.874368 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-03 05:25:05.874380 | orchestrator | Tuesday 03 February 2026 05:25:03 +0000 (0:00:05.602) 0:04:08.182 ****** 2026-02-03 05:25:05.874398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-03 05:25:05.874417 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:25:05.874438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-03 05:25:15.275545 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:25:15.275698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-03 05:25:15.275721 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:25:15.275734 | orchestrator | 2026-02-03 05:25:15.275746 | 
orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-03 05:25:15.275758 | orchestrator | Tuesday 03 February 2026 05:25:05 +0000 (0:00:01.869) 0:04:10.052 ****** 2026-02-03 05:25:15.275780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-03 05:25:15.275793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-03 05:25:15.275807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-03 05:25:15.275820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-03 05:25:15.275839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-03 05:25:15.275852 | orchestrator | skipping: [testbed-node-0] 2026-02-03 
05:25:15.275883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-03 05:25:15.275895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-03 05:25:15.275907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-03 05:25:15.275919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-03 05:25:15.275930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-03 05:25:15.275942 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:25:15.275953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-03 05:25:15.275965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-03 05:25:15.275984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-03 05:25:15.275996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-03 05:25:15.276007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-03 05:25:15.276019 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:25:15.276030 | orchestrator | 2026-02-03 05:25:15.276041 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-02-03 05:25:15.276053 | orchestrator | Tuesday 03 February 2026 05:25:08 +0000 (0:00:02.244) 0:04:12.296 ****** 2026-02-03 05:25:15.276067 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:25:15.276088 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:25:15.276100 | orchestrator 
| ok: [testbed-node-2] 2026-02-03 05:25:15.276114 | orchestrator | 2026-02-03 05:25:15.276127 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-02-03 05:25:15.276139 | orchestrator | Tuesday 03 February 2026 05:25:10 +0000 (0:00:02.343) 0:04:14.640 ****** 2026-02-03 05:25:15.276151 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:25:15.276164 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:25:15.276177 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:25:15.276230 | orchestrator | 2026-02-03 05:25:15.276242 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-02-03 05:25:15.276253 | orchestrator | Tuesday 03 February 2026 05:25:13 +0000 (0:00:03.052) 0:04:17.692 ****** 2026-02-03 05:25:15.276264 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:25:15.276275 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:25:15.276286 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:25:15.276297 | orchestrator | 2026-02-03 05:25:15.276308 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-02-03 05:25:15.276319 | orchestrator | Tuesday 03 February 2026 05:25:15 +0000 (0:00:01.522) 0:04:19.215 ****** 2026-02-03 05:25:15.276337 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:25:26.450584 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:25:26.450732 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:25:26.450759 | orchestrator | 2026-02-03 05:25:26.450782 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-03 05:25:26.450800 | orchestrator | Tuesday 03 February 2026 05:25:16 +0000 (0:00:01.601) 0:04:20.816 ****** 2026-02-03 05:25:26.450812 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:25:26.450823 | orchestrator | 2026-02-03 05:25:26.450834 | orchestrator | TASK 
[haproxy-config : Copying over keystone haproxy config] ******************* 2026-02-03 05:25:26.450844 | orchestrator | Tuesday 03 February 2026 05:25:18 +0000 (0:00:02.303) 0:04:23.120 ****** 2026-02-03 05:25:26.450860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-03 05:25:26.450897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-03 
05:25:26.450911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-03 05:25:26.450946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-03 05:25:26.450980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-03 05:25:26.450994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-03 05:25:26.451011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-03 05:25:26.451024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-03 05:25:26.451045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-03 05:25:26.451056 | orchestrator | 2026-02-03 05:25:26.451068 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-02-03 05:25:26.451079 | orchestrator | Tuesday 03 February 2026 05:25:24 +0000 (0:00:05.126) 0:04:28.247 ****** 2026-02-03 05:25:26.451099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-03 05:25:28.187969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-03 05:25:28.188058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-03 05:25:28.188066 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:25:28.188073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-03 05:25:28.188091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-03 05:25:28.188095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-03 05:25:28.188100 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:25:28.188116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-03 05:25:28.188127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-03 05:25:28.188137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-03 05:25:28.188143 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:25:28.188149 | orchestrator | 2026-02-03 05:25:28.188157 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-02-03 05:25:28.188165 | orchestrator | Tuesday 03 February 2026 05:25:26 +0000 (0:00:02.370) 0:04:30.618 ****** 2026-02-03 05:25:28.188208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-03 05:25:28.188218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-03 05:25:28.188228 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:25:28.188234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-03 05:25:28.188239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-03 05:25:28.188245 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:25:28.188251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-03 05:25:28.188271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-03 05:25:28.188280 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:25:28.188287 | orchestrator | 
2026-02-03 05:25:28.188293 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-03 05:25:28.188305 | orchestrator | Tuesday 03 February 2026 05:25:28 +0000 (0:00:01.743) 0:04:32.362 ****** 2026-02-03 05:25:45.574392 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:25:45.574497 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:25:45.574514 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:25:45.574526 | orchestrator | 2026-02-03 05:25:45.574539 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-03 05:25:45.574552 | orchestrator | Tuesday 03 February 2026 05:25:30 +0000 (0:00:02.552) 0:04:34.914 ****** 2026-02-03 05:25:45.574564 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:25:45.574575 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:25:45.574586 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:25:45.574598 | orchestrator | 2026-02-03 05:25:45.574610 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-02-03 05:25:45.574648 | orchestrator | Tuesday 03 February 2026 05:25:34 +0000 (0:00:03.487) 0:04:38.402 ****** 2026-02-03 05:25:45.574661 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:25:45.574673 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:25:45.574684 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:25:45.574695 | orchestrator | 2026-02-03 05:25:45.574706 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-02-03 05:25:45.574718 | orchestrator | Tuesday 03 February 2026 05:25:35 +0000 (0:00:01.478) 0:04:39.881 ****** 2026-02-03 05:25:45.574730 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:25:45.574741 | orchestrator | 2026-02-03 05:25:45.574770 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-02-03 
05:25:45.574783 | orchestrator | Tuesday 03 February 2026 05:25:37 +0000 (0:00:02.137) 0:04:42.019 ****** 2026-02-03 05:25:45.574801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:25:45.574817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': 
['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:25:45.574831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-03 05:25:45.574863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-03 05:25:45.574895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:25:45.574908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-03 05:25:45.574920 | orchestrator | 2026-02-03 05:25:45.574933 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-02-03 05:25:45.574946 | orchestrator | Tuesday 03 February 2026 05:25:43 +0000 (0:00:05.698) 
0:04:47.717 ****** 2026-02-03 05:25:45.574960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:25:45.574981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-03 05:26:00.315432 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:26:00.315564 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:26:00.315588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-03 05:26:00.315601 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:26:00.315614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:26:00.315627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-03 05:26:00.315662 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:26:00.315675 | orchestrator | 2026-02-03 05:26:00.315687 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] 
************************ 2026-02-03 05:26:00.315700 | orchestrator | Tuesday 03 February 2026 05:25:45 +0000 (0:00:02.036) 0:04:49.753 ****** 2026-02-03 05:26:00.315729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:26:00.315745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:26:00.315758 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:26:00.315770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:26:00.315787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:26:00.315799 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:26:00.315811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:26:00.315822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:26:00.315834 | orchestrator | skipping: [testbed-node-2] 
2026-02-03 05:26:00.315845 | orchestrator | 2026-02-03 05:26:00.315856 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-02-03 05:26:00.315867 | orchestrator | Tuesday 03 February 2026 05:25:47 +0000 (0:00:02.110) 0:04:51.863 ****** 2026-02-03 05:26:00.315879 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:26:00.315890 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:26:00.315901 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:26:00.315912 | orchestrator | 2026-02-03 05:26:00.315924 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-02-03 05:26:00.315935 | orchestrator | Tuesday 03 February 2026 05:25:50 +0000 (0:00:03.306) 0:04:55.170 ****** 2026-02-03 05:26:00.315946 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:26:00.315959 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:26:00.315972 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:26:00.315985 | orchestrator | 2026-02-03 05:26:00.315998 | orchestrator | TASK [include_role : manila] *************************************************** 2026-02-03 05:26:00.316011 | orchestrator | Tuesday 03 February 2026 05:25:54 +0000 (0:00:03.115) 0:04:58.285 ****** 2026-02-03 05:26:00.316026 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:26:00.316038 | orchestrator | 2026-02-03 05:26:00.316070 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-02-03 05:26:00.316095 | orchestrator | Tuesday 03 February 2026 05:25:56 +0000 (0:00:02.348) 0:05:00.634 ****** 2026-02-03 05:26:00.316109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:26:00.316135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 05:26:00.316200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:26:02.230273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-03 05:26:02.230393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-03 05:26:02.231117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 
'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 05:26:02.231180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-03 05:26:02.231194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-03 05:26:02.231240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:26:02.231255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 05:26:02.231267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-03 05:26:02.231286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-03 05:26:02.231298 | orchestrator | 2026-02-03 05:26:02.231312 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-03 05:26:02.231324 | orchestrator | Tuesday 03 February 2026 05:26:01 +0000 (0:00:05.045) 0:05:05.680 ****** 2026-02-03 05:26:02.231337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:26:02.231350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 05:26:02.231375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-03 05:26:05.580381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-03 05:26:05.580486 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:26:05.580505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:26:05.580542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 
05:26:05.580556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-03 05:26:05.580568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-03 05:26:05.580580 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:26:05.580612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:26:05.580625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 05:26:05.580646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-03 05:26:05.580657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 
'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-03 05:26:05.580669 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:26:05.580681 | orchestrator | 2026-02-03 05:26:05.580693 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-02-03 05:26:05.580705 | orchestrator | Tuesday 03 February 2026 05:26:03 +0000 (0:00:01.901) 0:05:07.581 ****** 2026-02-03 05:26:05.581473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:26:05.581504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:26:05.581519 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:26:05.581530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:26:05.581542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': 
['option httpchk']}})  2026-02-03 05:26:05.581553 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:26:05.581564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:26:05.581589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:26:22.040642 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:26:22.040765 | orchestrator | 2026-02-03 05:26:22.040779 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-02-03 05:26:22.040790 | orchestrator | Tuesday 03 February 2026 05:26:05 +0000 (0:00:02.177) 0:05:09.759 ****** 2026-02-03 05:26:22.040799 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:26:22.040809 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:26:22.040818 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:26:22.040826 | orchestrator | 2026-02-03 05:26:22.040835 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-02-03 05:26:22.040844 | orchestrator | Tuesday 03 February 2026 05:26:07 +0000 (0:00:02.406) 0:05:12.165 ****** 2026-02-03 05:26:22.040853 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:26:22.040862 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:26:22.040871 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:26:22.040879 | orchestrator | 2026-02-03 05:26:22.040888 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-02-03 05:26:22.040897 | orchestrator | Tuesday 03 February 2026 05:26:11 +0000 (0:00:03.231) 0:05:15.396 ****** 2026-02-03 05:26:22.040905 | 
orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:26:22.040914 | orchestrator | 2026-02-03 05:26:22.040923 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-02-03 05:26:22.040931 | orchestrator | Tuesday 03 February 2026 05:26:13 +0000 (0:00:02.724) 0:05:18.121 ****** 2026-02-03 05:26:22.040940 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-02-03 05:26:22.040950 | orchestrator | 2026-02-03 05:26:22.040959 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-02-03 05:26:22.040968 | orchestrator | Tuesday 03 February 2026 05:26:18 +0000 (0:00:04.261) 0:05:22.382 ****** 2026-02-03 05:26:22.040995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 05:26:22.041009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-03 05:26:22.041027 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:26:22.041056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 
'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 05:26:22.041071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-03 05:26:22.041081 | orchestrator | skipping: [testbed-node-1] 2026-02-03 
05:26:22.041097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 05:26:22.041156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': 
{'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-03 05:26:31.690001 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:26:31.690208 | orchestrator | 2026-02-03 05:26:31.690228 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-02-03 05:26:31.690242 | orchestrator | Tuesday 03 February 2026 05:26:22 +0000 (0:00:03.820) 0:05:26.202 ****** 2026-02-03 05:26:31.690277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 05:26:31.690296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-03 05:26:31.690334 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:26:31.690377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 05:26:31.690401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-03 05:26:31.690419 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:26:31.690445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 05:26:31.690478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-03 05:26:31.690498 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:26:31.690516 | orchestrator | 2026-02-03 05:26:31.690535 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-03 05:26:31.690554 | orchestrator | Tuesday 03 February 2026 05:26:26 +0000 (0:00:04.073) 0:05:30.276 ****** 2026-02-03 05:26:31.690589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-03 05:26:44.807016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-03 05:26:44.807189 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:26:44.807228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-03 05:26:44.807243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-03 05:26:44.807278 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:26:44.807291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-03 05:26:44.807303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-03 05:26:44.807314 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:26:44.807326 | orchestrator | 2026-02-03 05:26:44.807338 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-03 05:26:44.807350 | orchestrator | Tuesday 03 February 2026 05:26:31 +0000 (0:00:05.590) 0:05:35.866 ****** 2026-02-03 05:26:44.807361 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:26:44.807374 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:26:44.807384 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:26:44.807395 | orchestrator | 2026-02-03 05:26:44.807406 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-02-03 05:26:44.807417 | orchestrator | Tuesday 03 February 2026 05:26:34 +0000 (0:00:03.214) 0:05:39.080 ****** 2026-02-03 05:26:44.807428 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:26:44.807439 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:26:44.807450 | orchestrator | 
skipping: [testbed-node-2] 2026-02-03 05:26:44.807460 | orchestrator | 2026-02-03 05:26:44.807471 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-03 05:26:44.807482 | orchestrator | Tuesday 03 February 2026 05:26:37 +0000 (0:00:03.060) 0:05:42.141 ****** 2026-02-03 05:26:44.807493 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:26:44.807504 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:26:44.807515 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:26:44.807526 | orchestrator | 2026-02-03 05:26:44.807537 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-02-03 05:26:44.807548 | orchestrator | Tuesday 03 February 2026 05:26:39 +0000 (0:00:01.486) 0:05:43.628 ****** 2026-02-03 05:26:44.807577 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:26:44.807588 | orchestrator | 2026-02-03 05:26:44.807599 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-02-03 05:26:44.807610 | orchestrator | Tuesday 03 February 2026 05:26:41 +0000 (0:00:02.411) 0:05:46.040 ****** 2026-02-03 05:26:44.807622 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'active_passive': True}}}}) 2026-02-03 05:26:44.807649 | orchestrator | ok: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-03 05:26:44.807661 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-03 05:26:44.807672 | orchestrator | 2026-02-03 05:26:44.807683 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-03 05:26:44.807695 | orchestrator | Tuesday 03 February 2026 05:26:44 +0000 (0:00:02.636) 0:05:48.677 ****** 2026-02-03 05:26:44.807706 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-03 05:26:44.807718 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:26:44.807737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-03 05:27:01.176210 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:27:01.176327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-03 05:27:01.176371 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:27:01.176383 | orchestrator | 2026-02-03 05:27:01.176396 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-03 05:27:01.176408 | orchestrator | Tuesday 03 February 2026 05:26:46 +0000 (0:00:02.104) 0:05:50.781 ****** 2026-02-03 05:27:01.176436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-03 05:27:01.176449 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:27:01.176461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-03 05:27:01.176472 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:27:01.176483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  
2026-02-03 05:27:01.176494 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:27:01.176504 | orchestrator | 2026-02-03 05:27:01.176516 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-03 05:27:01.176527 | orchestrator | Tuesday 03 February 2026 05:26:48 +0000 (0:00:01.666) 0:05:52.448 ****** 2026-02-03 05:27:01.176539 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:27:01.176550 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:27:01.176560 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:27:01.176571 | orchestrator | 2026-02-03 05:27:01.176582 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-03 05:27:01.176593 | orchestrator | Tuesday 03 February 2026 05:26:49 +0000 (0:00:01.606) 0:05:54.055 ****** 2026-02-03 05:27:01.176603 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:27:01.176614 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:27:01.176625 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:27:01.176636 | orchestrator | 2026-02-03 05:27:01.176646 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-03 05:27:01.176657 | orchestrator | Tuesday 03 February 2026 05:26:52 +0000 (0:00:02.621) 0:05:56.676 ****** 2026-02-03 05:27:01.176668 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:27:01.176679 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:27:01.176690 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:27:01.176700 | orchestrator | 2026-02-03 05:27:01.176712 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-02-03 05:27:01.176726 | orchestrator | Tuesday 03 February 2026 05:26:54 +0000 (0:00:01.780) 0:05:58.457 ****** 2026-02-03 05:27:01.176739 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:27:01.176753 | 
orchestrator | 2026-02-03 05:27:01.176765 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-02-03 05:27:01.176778 | orchestrator | Tuesday 03 February 2026 05:26:56 +0000 (0:00:02.189) 0:06:00.646 ****** 2026-02-03 05:27:01.176812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:27:01.176840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-03 05:27:01.176861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-03 05:27:01.176877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-03 05:27:01.176892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-03 05:27:01.176921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-03 05:27:01.514708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-03 05:27:01.514828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-03 05:27:01.514847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-03 05:27:01.514861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-03 05:27:01.514875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-03 05:27:01.514922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-03 05:27:01.514969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-03 05:27:01.515004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-03 05:27:01.515020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-03 05:27:01.515032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:27:01.515052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-03 05:27:01.515075 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-03 05:27:01.623919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
"healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-03 05:27:01.624023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-03 05:27:01.624041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-03 05:27:01.624076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:27:01.624117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-03 05:27:01.624157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-03 05:27:01.624171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-03 05:27:01.624183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-03 05:27:01.624203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-03 05:27:01.624215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-03 05:27:01.624239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-03 05:27:02.956956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-03 05:27:02.957058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-03 05:27:02.957133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}}})  2026-02-03 05:27:02.957149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-03 05:27:02.957170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-03 05:27:02.957189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-03 05:27:02.957234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-03 05:27:02.957327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-03 05:27:02.957359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 
'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-03 05:27:02.957372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-03 05:27:02.957385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-03 05:27:02.957506 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-03 05:27:02.957535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-03 05:27:04.144826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-03 05:27:04.144957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-03 05:27:04.144977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-03 05:27:04.144991 | orchestrator | 2026-02-03 05:27:04.145003 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-02-03 05:27:04.145015 | orchestrator | Tuesday 03 February 2026 05:27:02 +0000 (0:00:06.492) 0:06:07.139 ****** 2026-02-03 05:27:04.145045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:27:04.145082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-03 05:27:04.145187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-03 05:27:04.145223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-03 05:27:04.145244 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-03 05:27:04.145272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-03 05:27:04.145308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option 
httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:27:04.239174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-03 05:27:04.239294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-03 05:27:04.239315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-03 05:27:04.239348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-03 05:27:04.239364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-03 05:27:04.239417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-03 05:27:04.239433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': 
'30'}}})  2026-02-03 05:27:04.239445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-03 05:27:04.239458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-03 05:27:04.239476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-03 05:27:04.239490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 
'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-03 05:27:04.239528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-03 05:27:04.312720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-03 05:27:04.312814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 
'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-03 05:27:04.312830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-03 05:27:04.312861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': 
{'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-03 05:27:04.312894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:27:04.312929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-03 05:27:04.312942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-03 05:27:04.312954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-03 05:27:04.312979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-03 05:27:04.312992 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:27:04.313006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-03 05:27:04.313033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-03 
05:27:05.687915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-03 05:27:05.688022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-03 05:27:05.688042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-03 05:27:05.688073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-03 05:27:05.688169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-03 05:27:05.688222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-03 05:27:05.688243 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:27:05.688264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-03 05:27:05.688277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-03 05:27:05.688296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-03 05:27:05.688318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-03 05:27:05.688330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 
'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-03 05:27:05.688351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-03 05:27:21.830013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-03 05:27:21.830531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-03 05:27:21.830588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-03 05:27:21.830631 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:27:21.830647 | orchestrator | 2026-02-03 05:27:21.830673 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-03 05:27:21.830686 | orchestrator | Tuesday 03 February 2026 05:27:05 +0000 (0:00:02.728) 0:06:09.867 ****** 2026-02-03 05:27:21.830710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'backend_http_extra': ['option httpchk']}})
2026-02-03 05:27:21.830726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-02-03 05:27:21.830739 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:27:21.830751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-02-03 05:27:21.830762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-02-03 05:27:21.830773 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:27:21.830785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-02-03 05:27:21.830817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-02-03 05:27:21.830828 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:27:21.830840 | orchestrator |
2026-02-03 05:27:21.830851 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2026-02-03 05:27:21.830862 | orchestrator | Tuesday 03 February 2026 05:27:08 +0000 (0:00:03.195) 0:06:13.063 ******
2026-02-03 05:27:21.830873 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:27:21.830885 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:27:21.830896 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:27:21.830907 | orchestrator |
2026-02-03 05:27:21.830918 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2026-02-03 05:27:21.830929 | orchestrator | Tuesday 03 February 2026 05:27:11 +0000 (0:00:02.434) 0:06:15.498 ******
2026-02-03 05:27:21.830940 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:27:21.830950 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:27:21.830961 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:27:21.830972 | orchestrator |
2026-02-03 05:27:21.830982 | orchestrator | TASK [include_role : placement] ************************************************
2026-02-03 05:27:21.830993 | orchestrator | Tuesday 03 February 2026 05:27:14 +0000 (0:00:03.119) 0:06:18.617 ******
2026-02-03 05:27:21.831004 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 05:27:21.831015 | orchestrator |
2026-02-03 05:27:21.831026 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2026-02-03 05:27:21.831037 | orchestrator | Tuesday 03 February 2026 05:27:16 +0000 (0:00:02.496) 0:06:21.114 ******
2026-02-03 05:27:21.831050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application',
'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-03 05:27:21.831106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-03 05:27:21.831145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-03 05:27:40.398801 | orchestrator | 2026-02-03 05:27:40.398898 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-03 05:27:40.398911 | orchestrator | Tuesday 03 February 2026 05:27:21 +0000 (0:00:04.891) 0:06:26.006 ****** 2026-02-03 05:27:40.398921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk GET /']}}}})  2026-02-03 05:27:40.398955 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:27:40.398977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-03 05:27:40.398985 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:27:40.398992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-02-03 05:27:40.398999 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:27:40.399005 | orchestrator |
2026-02-03 05:27:40.399012 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2026-02-03 05:27:40.399019 | orchestrator | Tuesday 03 February 2026 05:27:23 +0000 (0:00:01.678) 0:06:27.684 ******
2026-02-03 05:27:40.399027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-02-03 05:27:40.399051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-02-03 05:27:40.399091 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:27:40.399099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-02-03 05:27:40.399107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-02-03 05:27:40.399121 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:27:40.399128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-02-03 05:27:40.399135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-02-03 05:27:40.399142 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:27:40.399149 | orchestrator |
2026-02-03 05:27:40.399155 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2026-02-03 05:27:40.399161 | orchestrator | Tuesday 03 February 2026 05:27:25 +0000 (0:00:02.077) 0:06:29.761 ******
2026-02-03 05:27:40.399168 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:27:40.399176 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:27:40.399183 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:27:40.399189 | orchestrator |
2026-02-03 05:27:40.399193 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2026-02-03 05:27:40.399197 | orchestrator | Tuesday 03 February 2026 05:27:27 +0000 (0:00:02.368) 0:06:32.129 ******
2026-02-03 05:27:40.399201 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:27:40.399205 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:27:40.399208 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:27:40.399212 | orchestrator |
2026-02-03 05:27:40.399216 | orchestrator | TASK [include_role : nova] *****************************************************
2026-02-03 05:27:40.399220 | orchestrator | Tuesday 03 February 2026 05:27:31 +0000 (0:00:03.531) 0:06:35.661 ******
2026-02-03 05:27:40.399224 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:27:40.399228 | orchestrator | 2026-02-03 05:27:40.399232 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-03 05:27:40.399239 | orchestrator | Tuesday 03 February 2026 05:27:34 +0000 (0:00:02.598) 0:06:38.259 ****** 2026-02-03 05:27:40.399243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:27:40.399252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:27:41.583018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:27:41.583201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:27:41.583220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-03 05:27:41.583229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-03 05:27:41.583271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:27:41.583279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  
2026-02-03 05:27:41.583287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-03 05:27:41.583298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-03 05:27:41.583306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-03 05:27:41.583318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-03 05:27:41.583326 | orchestrator |
2026-02-03 05:27:41.583334 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2026-02-03 05:27:41.583346 | orchestrator | Tuesday 03 February 2026 05:27:41 +0000 (0:00:07.503) 0:06:45.763 ******
2026-02-03 05:27:42.370726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-03 05:27:42.370863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-03 05:27:42.370895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-03 05:27:42.370956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-03 05:27:42.370991 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:27:42.371028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-03 05:27:42.371043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-03 05:27:42.371093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-03 05:27:42.371108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-03 05:27:42.371120 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:27:42.371140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-03 05:27:42.371163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-03 05:28:05.569781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-03 05:28:05.569959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-03 05:28:05.569981 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:28:05.569996 | orchestrator |
2026-02-03 05:28:05.570009 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2026-02-03 05:28:05.570097 | orchestrator | Tuesday 03 February 2026 05:27:43 +0000 (0:00:02.038) 0:06:47.802 ******
2026-02-03 05:28:05.570111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-03 05:28:05.570153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-03 05:28:05.570170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-03 05:28:05.570191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-03 05:28:05.570207 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:28:05.570226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-03 05:28:05.570243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-03 05:28:05.570259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-03 05:28:05.570276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-03 05:28:05.570295 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:28:05.570315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-03 05:28:05.570356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-03 05:28:05.570369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-03 05:28:05.570380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-03 05:28:05.570391 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:28:05.570402 | orchestrator |
2026-02-03 05:28:05.570413 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2026-02-03 05:28:05.570424 | orchestrator | Tuesday 03 February 2026 05:27:46 +0000 (0:00:02.944) 0:06:50.747 ******
2026-02-03 05:28:05.570435 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:28:05.570447 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:28:05.570457 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:28:05.570468 | orchestrator |
2026-02-03 05:28:05.570479 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2026-02-03 05:28:05.570517 | orchestrator | Tuesday 03 February 2026 05:27:48 +0000 (0:00:02.329) 0:06:53.076 ******
2026-02-03 05:28:05.570540 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:28:05.570551 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:28:05.570562 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:28:05.570573 | orchestrator |
2026-02-03 05:28:05.570584 | orchestrator | TASK [include_role : nova-cell] ************************************************
2026-02-03 05:28:05.570595 | orchestrator | Tuesday 03 February 2026 05:27:52 +0000 (0:00:03.222) 0:06:56.299 ******
2026-02-03 05:28:05.570607 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-03 05:28:05.570618 | orchestrator |
2026-02-03 05:28:05.570628 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2026-02-03 05:28:05.570639 | orchestrator | Tuesday 03 February 2026 05:27:55 +0000 (0:00:02.948) 0:06:59.248 ******
2026-02-03 05:28:05.570651 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2026-02-03 05:28:05.570663 | orchestrator |
2026-02-03 05:28:05.570674 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2026-02-03 05:28:05.570685 | orchestrator | Tuesday 03 February 2026 05:27:56 +0000 (0:00:01.892) 0:07:01.140 ******
2026-02-03 05:28:05.570697 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-03 05:28:05.570712 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-03 05:28:05.570724 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-03 05:28:05.570735 | orchestrator |
2026-02-03 05:28:05.570747 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2026-02-03 05:28:05.570759 | orchestrator | Tuesday 03 February 2026 05:28:02 +0000 (0:00:06.039) 0:07:07.180 ******
2026-02-03 05:28:05.570771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-03 05:28:05.570790 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:28:30.481466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-03 05:28:30.481629 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:28:30.481670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-03 05:28:30.481684 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:28:30.481695 | orchestrator |
2026-02-03 05:28:30.481714 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2026-02-03 05:28:30.481735 | orchestrator | Tuesday 03 February 2026 05:28:05 +0000 (0:00:02.569) 0:07:09.750 ******
2026-02-03 05:28:30.481755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-03 05:28:30.481779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-03 05:28:30.481800 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:28:30.481820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-03 05:28:30.481839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-03 05:28:30.481858 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:28:30.481877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-03 05:28:30.481896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-03 05:28:30.481916 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:28:30.481934 | orchestrator |
2026-02-03 05:28:30.481952 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-03 05:28:30.481972 | orchestrator | Tuesday 03 February 2026 05:28:08 +0000 (0:00:02.739) 0:07:12.489 ******
2026-02-03 05:28:30.481992 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:28:30.482007 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:28:30.482139 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:28:30.482164 | orchestrator |
2026-02-03 05:28:30.482185 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-03 05:28:30.482204 | orchestrator | Tuesday 03 February 2026 05:28:12 +0000 (0:00:04.059) 0:07:16.549 ******
2026-02-03 05:28:30.482227 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:28:30.482247 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:28:30.482264 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:28:30.482277 | orchestrator |
2026-02-03 05:28:30.482290 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-02-03 05:28:30.482304 | orchestrator | Tuesday 03 February 2026 05:28:16 +0000 (0:00:04.372) 0:07:20.922 ******
2026-02-03 05:28:30.482318 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-02-03 05:28:30.482350 | orchestrator |
2026-02-03 05:28:30.482367 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-02-03 05:28:30.482384 | orchestrator | Tuesday 03 February 2026 05:28:18 +0000 (0:00:01.826) 0:07:22.748 ******
2026-02-03 05:28:30.482426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-03 05:28:30.482449 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:28:30.482468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-03 05:28:30.482482 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:28:30.482511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-03 05:28:30.482533 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:28:30.482551 | orchestrator |
2026-02-03 05:28:30.482565 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-02-03 05:28:30.482576 | orchestrator | Tuesday 03 February 2026 05:28:21 +0000 (0:00:02.669) 0:07:25.418 ******
2026-02-03 05:28:30.482587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-03 05:28:30.482599 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:28:30.482610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-03 05:28:30.482621 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:28:30.482632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-03 05:28:30.482655 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:28:30.482666 | orchestrator |
2026-02-03 05:28:30.482677 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-02-03 05:28:30.482688 | orchestrator | Tuesday 03 February 2026 05:28:23 +0000 (0:00:02.714) 0:07:28.132 ******
2026-02-03 05:28:30.482700 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:28:30.482719 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:28:30.482737 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:28:30.482756 | orchestrator |
2026-02-03 05:28:30.482773 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-03 05:28:30.482793 | orchestrator | Tuesday 03 February 2026 05:28:26 +0000 (0:00:02.628) 0:07:30.761 ******
2026-02-03 05:28:30.482811 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:28:30.482831 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:28:30.482850 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:28:30.482869 | orchestrator |
2026-02-03 05:28:30.482888 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-03 05:28:30.482903 | orchestrator | Tuesday 03 February 2026 05:28:30 +0000 (0:00:03.895) 0:07:34.656 ******
2026-02-03 05:29:02.157670 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:29:02.157747 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:29:02.157753 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:29:02.157758 | orchestrator |
2026-02-03 05:29:02.157763 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2026-02-03 05:29:02.157768 | orchestrator | Tuesday 03 February 2026 05:28:34 +0000 (0:00:04.320) 0:07:38.977 ******
2026-02-03 05:29:02.157773 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2026-02-03 05:29:02.157778 | orchestrator |
2026-02-03 05:29:02.157782 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2026-02-03 05:29:02.157787 | orchestrator | Tuesday 03 February 2026 05:28:37 +0000 (0:00:02.635) 0:07:41.613 ******
2026-02-03 05:29:02.157807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-03 05:29:02.157814 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:29:02.157819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-03 05:29:02.157823 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:29:02.157828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-03 05:29:02.157845 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:29:02.157849 | orchestrator |
2026-02-03 05:29:02.157853 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2026-02-03 05:29:02.157858 | orchestrator | Tuesday 03 February 2026 05:28:40 +0000 (0:00:02.654) 0:07:44.268 ******
2026-02-03 05:29:02.157862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-03 05:29:02.157866 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:29:02.157870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-03 05:29:02.157873 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:29:02.157925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-03 05:29:02.157930 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:29:02.157934 | orchestrator |
2026-02-03 05:29:02.157938 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2026-02-03 05:29:02.157942 | orchestrator | Tuesday 03 February 2026 05:28:42 +0000 (0:00:02.784) 0:07:47.052 ******
2026-02-03 05:29:02.157946 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:29:02.157950 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:29:02.157954 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:29:02.157958 | orchestrator |
2026-02-03 05:29:02.157961 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-03 05:29:02.157965 | orchestrator | Tuesday 03 February 2026 05:28:46 +0000 (0:00:03.195) 0:07:50.247 ******
2026-02-03 05:29:02.157969 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:29:02.157973 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:29:02.157977 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:29:02.157981 | orchestrator |
2026-02-03 05:29:02.157985 | orchestrator |
TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-03 05:29:02.157989 | orchestrator | Tuesday 03 February 2026 05:28:49 +0000 (0:00:03.791) 0:07:54.038 ****** 2026-02-03 05:29:02.157992 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:29:02.157996 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:29:02.158000 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:29:02.158074 | orchestrator | 2026-02-03 05:29:02.158080 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-02-03 05:29:02.158088 | orchestrator | Tuesday 03 February 2026 05:28:54 +0000 (0:00:04.799) 0:07:58.838 ****** 2026-02-03 05:29:02.158091 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:29:02.158096 | orchestrator | 2026-02-03 05:29:02.158099 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-02-03 05:29:02.158109 | orchestrator | Tuesday 03 February 2026 05:28:57 +0000 (0:00:02.819) 0:08:01.658 ****** 2026-02-03 05:29:02.158114 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-03 05:29:02.158120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-03 05:29:02.158126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-03 05:29:02.158135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-03 05:29:04.418630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-03 05:29:04.418755 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-03 05:29:04.418800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-03 05:29:04.418814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-03 05:29:04.418827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-03 05:29:04.418839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-03 05:29:04.418872 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-03 05:29:04.418898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-03 05:29:04.418911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-03 05:29:04.418922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-03 05:29:04.418934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-03 05:29:04.418946 | orchestrator | 2026-02-03 05:29:04.418959 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-03 05:29:04.418972 | orchestrator | Tuesday 03 February 2026 05:29:03 +0000 (0:00:05.885) 0:08:07.543 ****** 2026-02-03 05:29:04.418992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-03 05:29:05.665202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  
2026-02-03 05:29:05.665329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-03 05:29:05.665346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-03 05:29:05.665360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2026-02-03 05:29:05.665372 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:29:05.665387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-03 05:29:05.665402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-03 05:29:05.665448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-03 05:29:05.665462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-03 05:29:05.665473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-03 05:29:05.665485 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:29:05.665497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-03 05:29:05.665509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-03 05:29:05.665528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-03 05:29:23.966780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-03 05:29:23.967123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-03 05:29:23.967160 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:29:23.967185 | orchestrator | 2026-02-03 05:29:23.967208 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-02-03 05:29:23.967225 | orchestrator | Tuesday 03 February 2026 05:29:05 +0000 (0:00:02.306) 0:08:09.850 ****** 2026-02-03 05:29:23.967240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-03 05:29:23.967255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-03 05:29:23.967270 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:29:23.967281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-03 05:29:23.967292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-03 05:29:23.967303 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:29:23.967314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-03 05:29:23.967325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-03 05:29:23.967336 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:29:23.967346 | orchestrator | 2026-02-03 05:29:23.967357 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-02-03 05:29:23.967368 | orchestrator | Tuesday 03 February 2026 05:29:08 +0000 (0:00:02.417) 0:08:12.268 ****** 2026-02-03 05:29:23.967379 | orchestrator | ok: [testbed-node-0] 2026-02-03 
05:29:23.967391 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:29:23.967428 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:29:23.967439 | orchestrator | 2026-02-03 05:29:23.967450 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-02-03 05:29:23.967461 | orchestrator | Tuesday 03 February 2026 05:29:10 +0000 (0:00:02.402) 0:08:14.671 ****** 2026-02-03 05:29:23.967471 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:29:23.967482 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:29:23.967493 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:29:23.967503 | orchestrator | 2026-02-03 05:29:23.967514 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-02-03 05:29:23.967525 | orchestrator | Tuesday 03 February 2026 05:29:13 +0000 (0:00:03.169) 0:08:17.840 ****** 2026-02-03 05:29:23.967536 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:29:23.967565 | orchestrator | 2026-02-03 05:29:23.967576 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-02-03 05:29:23.967587 | orchestrator | Tuesday 03 February 2026 05:29:16 +0000 (0:00:02.889) 0:08:20.730 ****** 2026-02-03 05:29:23.967640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:29:23.967658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:29:23.967670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:29:23.967684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-03 05:29:23.967732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-03 05:29:28.230867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-03 05:29:28.230977 | orchestrator | 2026-02-03 05:29:28.231047 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-03 
05:29:28.231061 | orchestrator | Tuesday 03 February 2026 05:29:23 +0000 (0:00:07.412) 0:08:28.142 ****** 2026-02-03 05:29:28.231074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:29:28.231114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-03 05:29:28.231128 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:29:28.231177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:29:28.231191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-03 05:29:28.231203 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:29:28.231215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:29:28.231235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-03 05:29:28.231247 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:29:28.231258 | orchestrator | 2026-02-03 05:29:28.231270 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-03 05:29:28.231287 | orchestrator | Tuesday 03 February 2026 05:29:26 +0000 (0:00:02.426) 0:08:30.569 ****** 2026-02-03 05:29:28.231299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:29:28.231376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-03 05:29:38.010748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-03 05:29:38.010884 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:29:38.010905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:29:38.010919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-03 05:29:38.010933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-03 05:29:38.010980 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:29:38.011039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:29:38.011050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-03 05:29:38.011062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-03 05:29:38.011073 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:29:38.011084 | orchestrator | 2026-02-03 05:29:38.011096 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-03 05:29:38.011108 | orchestrator | Tuesday 03 February 2026 05:29:28 +0000 (0:00:01.847) 0:08:32.416 ****** 2026-02-03 05:29:38.011119 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:29:38.011130 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:29:38.011141 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:29:38.011151 | orchestrator | 2026-02-03 05:29:38.011162 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-03 05:29:38.011173 | orchestrator | Tuesday 03 February 2026 05:29:29 +0000 (0:00:01.542) 0:08:33.959 ****** 2026-02-03 05:29:38.011184 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:29:38.011195 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:29:38.011206 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:29:38.011216 | orchestrator | 2026-02-03 05:29:38.011227 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-03 05:29:38.011238 | orchestrator | Tuesday 03 February 2026 05:29:32 +0000 (0:00:02.591) 0:08:36.551 ****** 2026-02-03 05:29:38.011249 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:29:38.011260 | orchestrator | 2026-02-03 05:29:38.011272 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-03 05:29:38.011285 | orchestrator | Tuesday 03 February 2026 05:29:35 +0000 (0:00:02.784) 0:08:39.336 ****** 2026-02-03 05:29:38.011370 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-03 05:29:38.011390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-03 05:29:38.011414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-03 05:29:38.011426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-03 05:29:38.011439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:29:38.011451 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:29:38.011463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:29:38.011516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:29:40.116056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-03 05:29:40.116162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-03 05:29:40.116182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-03 05:29:40.116197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-03 05:29:40.116215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:29:40.116227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:29:40.116279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-03 05:29:40.116293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:29:40.116307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-03 05:29:40.116325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:29:40.116351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-03 05:29:42.786324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:29:42.786463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:29:42.786489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:29:42.786502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:29:42.786514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-03 05:29:42.786544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-03 
05:29:42.786603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:29:42.786619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': 
'9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-03 05:29:42.786632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:29:42.786643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:29:42.786655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-03 05:29:42.786667 | orchestrator | 2026-02-03 05:29:42.786680 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-03 05:29:42.786692 | orchestrator | Tuesday 03 February 2026 05:29:41 
+0000 (0:00:06.516) 0:08:45.853 ****** 2026-02-03 05:29:42.786722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-03 05:29:42.786745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-03 05:29:43.052637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:29:43.052797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:29:43.052819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-03 05:29:43.052852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:29:43.052942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-03 05:29:43.053068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:29:43.053100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:29:43.053121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-03 05:29:43.053142 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:29:43.053167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-03 05:29:43.053215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-03 05:29:43.053238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:29:43.053259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 
'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:29:43.053295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-03 05:29:44.264777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:29:44.264903 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-03 05:29:44.264964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:29:44.265038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:29:44.265062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-03 05:29:44.265082 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:29:44.265133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-03 05:29:44.265157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-03 05:29:44.265177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:29:44.265211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:29:44.265242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-03 05:29:44.265265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:29:44.265302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 
'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-03 05:29:57.605172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:29:57.605394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:29:57.605449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-03 05:29:57.605471 | orchestrator | skipping: [testbed-node-2] 2026-02-03 
05:29:57.605492 | orchestrator | 2026-02-03 05:29:57.605513 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-03 05:29:57.605532 | orchestrator | Tuesday 03 February 2026 05:29:44 +0000 (0:00:02.594) 0:08:48.448 ****** 2026-02-03 05:29:57.605551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-03 05:29:57.605572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-03 05:29:57.605593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:29:57.605612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:29:57.605632 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:29:57.605651 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-03 05:29:57.605670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-03 05:29:57.605719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:29:57.605757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:29:57.605776 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:29:57.605816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-03 
05:29:57.605836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-03 05:29:57.605865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:29:57.605885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-03 05:29:57.605904 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:29:57.605923 | orchestrator | 2026-02-03 05:29:57.605942 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-02-03 05:29:57.605959 | orchestrator | Tuesday 03 February 2026 05:29:46 +0000 (0:00:02.239) 0:08:50.687 ****** 2026-02-03 05:29:57.606010 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:29:57.606118 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:29:57.606139 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:29:57.606157 | orchestrator | 2026-02-03 05:29:57.606174 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-02-03 05:29:57.606192 | orchestrator | Tuesday 03 
February 2026 05:29:48 +0000 (0:00:02.108) 0:08:52.795 ****** 2026-02-03 05:29:57.606211 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:29:57.606229 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:29:57.606247 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:29:57.606265 | orchestrator | 2026-02-03 05:29:57.606283 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-02-03 05:29:57.606301 | orchestrator | Tuesday 03 February 2026 05:29:50 +0000 (0:00:02.364) 0:08:55.160 ****** 2026-02-03 05:29:57.606318 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:29:57.606334 | orchestrator | 2026-02-03 05:29:57.606350 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-02-03 05:29:57.606360 | orchestrator | Tuesday 03 February 2026 05:29:53 +0000 (0:00:02.511) 0:08:57.672 ****** 2026-02-03 05:29:57.606388 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-03 
05:30:16.720844 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-03 05:30:16.721045 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': 
'15672', 'host_group': 'rabbitmq'}}}}) 2026-02-03 05:30:16.721843 | orchestrator | 2026-02-03 05:30:16.721869 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-02-03 05:30:16.721882 | orchestrator | Tuesday 03 February 2026 05:29:57 +0000 (0:00:04.110) 0:09:01.782 ****** 2026-02-03 05:30:16.721895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-03 05:30:16.721909 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:30:16.721993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-03 05:30:16.722009 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:30:16.722071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-03 05:30:16.722083 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:30:16.722094 | orchestrator | 2026-02-03 05:30:16.722106 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-02-03 05:30:16.722117 | orchestrator | Tuesday 03 February 2026 05:29:59 +0000 (0:00:01.491) 0:09:03.273 ****** 2026-02-03 05:30:16.722129 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-03 05:30:16.722150 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:30:16.722161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-03 05:30:16.722172 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:30:16.722184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-03 05:30:16.722194 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:30:16.722205 | orchestrator | 2026-02-03 05:30:16.722216 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-03 05:30:16.722227 | orchestrator | Tuesday 03 February 2026 05:30:00 +0000 (0:00:01.611) 0:09:04.885 ****** 2026-02-03 05:30:16.722238 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:30:16.722249 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:30:16.722260 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:30:16.722270 | orchestrator | 2026-02-03 05:30:16.722281 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-03 05:30:16.722292 | orchestrator | Tuesday 03 February 2026 05:30:03 +0000 (0:00:02.309) 0:09:07.194 ****** 2026-02-03 05:30:16.722303 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:30:16.722314 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:30:16.722335 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:30:16.722346 | orchestrator | 2026-02-03 05:30:16.722356 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-02-03 05:30:16.722367 | orchestrator | Tuesday 03 February 2026 
05:30:05 +0000 (0:00:02.526) 0:09:09.721 ****** 2026-02-03 05:30:16.722378 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:30:16.722389 | orchestrator | 2026-02-03 05:30:16.722399 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-02-03 05:30:16.722410 | orchestrator | Tuesday 03 February 2026 05:30:08 +0000 (0:00:02.517) 0:09:12.238 ****** 2026-02-03 05:30:16.722422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-03 05:30:16.722446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-03 05:30:18.543204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-03 05:30:18.543305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-03 05:30:18.543345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-03 05:30:18.543378 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-03 05:30:18.543392 | orchestrator | 2026-02-03 05:30:18.543405 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-03 05:30:18.543418 | orchestrator | Tuesday 03 February 2026 05:30:16 +0000 (0:00:08.656) 0:09:20.895 ****** 2026-02-03 05:30:18.543438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-03 05:30:18.543462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-03 05:30:18.543474 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:30:18.543487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-03 05:30:18.543507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-03 05:30:41.472938 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:30:41.473085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-03 05:30:41.473112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-03 05:30:41.473118 | orchestrator | 
skipping: [testbed-node-2] 2026-02-03 05:30:41.473123 | orchestrator | 2026-02-03 05:30:41.473127 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-03 05:30:41.473133 | orchestrator | Tuesday 03 February 2026 05:30:18 +0000 (0:00:01.828) 0:09:22.723 ****** 2026-02-03 05:30:41.473139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-03 05:30:41.473145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-03 05:30:41.473152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-03 05:30:41.473157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-03 05:30:41.473161 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:30:41.473166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-03 05:30:41.473170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-03 05:30:41.473201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-03 05:30:41.473218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-03 05:30:41.473225 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:30:41.473231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-03 05:30:41.473238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-03 05:30:41.473245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-03 05:30:41.473253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-03 05:30:41.473260 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:30:41.473267 | orchestrator | 2026-02-03 05:30:41.473274 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-03 05:30:41.473282 | orchestrator | Tuesday 03 February 2026 05:30:20 +0000 (0:00:02.208) 0:09:24.932 ****** 2026-02-03 05:30:41.473289 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:30:41.473295 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:30:41.473302 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:30:41.473309 | orchestrator | 2026-02-03 05:30:41.473315 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-02-03 05:30:41.473322 | orchestrator | Tuesday 03 February 2026 05:30:23 +0000 (0:00:02.427) 0:09:27.359 ****** 2026-02-03 05:30:41.473328 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:30:41.473334 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:30:41.473339 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:30:41.473346 | orchestrator | 2026-02-03 05:30:41.473352 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-03 05:30:41.473358 | orchestrator | Tuesday 03 February 2026 05:30:26 +0000 (0:00:03.205) 0:09:30.564 ****** 2026-02-03 05:30:41.473364 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:30:41.473370 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:30:41.473377 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:30:41.473383 | orchestrator | 2026-02-03 05:30:41.473390 | orchestrator | TASK [include_role : trove] **************************************************** 2026-02-03 05:30:41.473397 | orchestrator | Tuesday 03 February 2026 05:30:27 +0000 (0:00:01.510) 0:09:32.075 ****** 2026-02-03 
05:30:41.473404 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:30:41.473411 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:30:41.473418 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:30:41.473425 | orchestrator | 2026-02-03 05:30:41.473430 | orchestrator | TASK [include_role : venus] **************************************************** 2026-02-03 05:30:41.473434 | orchestrator | Tuesday 03 February 2026 05:30:29 +0000 (0:00:01.464) 0:09:33.540 ****** 2026-02-03 05:30:41.473438 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:30:41.473442 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:30:41.473446 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:30:41.473450 | orchestrator | 2026-02-03 05:30:41.473454 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-03 05:30:41.473463 | orchestrator | Tuesday 03 February 2026 05:30:31 +0000 (0:00:01.857) 0:09:35.398 ****** 2026-02-03 05:30:41.473468 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:30:41.473480 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:30:41.473484 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:30:41.473489 | orchestrator | 2026-02-03 05:30:41.473493 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-03 05:30:41.473498 | orchestrator | Tuesday 03 February 2026 05:30:32 +0000 (0:00:01.598) 0:09:36.996 ****** 2026-02-03 05:30:41.473504 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:30:41.473508 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:30:41.473513 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:30:41.473518 | orchestrator | 2026-02-03 05:30:41.473523 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-02-03 05:30:41.473527 | orchestrator | Tuesday 03 February 2026 05:30:34 +0000 (0:00:01.586) 0:09:38.583 ****** 2026-02-03 
05:30:41.473532 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:30:41.473538 | orchestrator | 2026-02-03 05:30:41.473543 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-02-03 05:30:41.473547 | orchestrator | Tuesday 03 February 2026 05:30:37 +0000 (0:00:02.990) 0:09:41.573 ****** 2026-02-03 05:30:41.473563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-03 05:30:46.160766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-03 05:30:46.160878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-03 05:30:46.160901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 05:30:46.160989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 05:30:46.161008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-03 05:30:46.161032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-03 05:30:46.161059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-03 05:30:46.161068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 
'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-03 05:30:46.161077 | orchestrator | 2026-02-03 05:30:46.161087 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-02-03 05:30:46.161099 | orchestrator | Tuesday 03 February 2026 05:30:41 +0000 (0:00:04.080) 0:09:45.654 ****** 2026-02-03 05:30:46.161115 | orchestrator | changed: [testbed-node-0] => { 2026-02-03 05:30:46.161129 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:30:46.161142 | orchestrator | } 2026-02-03 05:30:46.161155 | orchestrator | changed: [testbed-node-1] => { 2026-02-03 05:30:46.161167 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:30:46.161179 | orchestrator | } 2026-02-03 05:30:46.161191 | orchestrator | changed: [testbed-node-2] => { 2026-02-03 05:30:46.161203 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:30:46.161215 | orchestrator | } 2026-02-03 05:30:46.161227 | orchestrator | 2026-02-03 05:30:46.161239 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-03 05:30:46.161251 | orchestrator | Tuesday 03 February 2026 05:30:42 +0000 (0:00:01.488) 0:09:47.143 ****** 2026-02-03 05:30:46.161276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-03 05:30:46.161289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 05:30:46.161302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 05:30:46.161316 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:30:46.161338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-03 05:30:46.161365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 05:32:49.434087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 05:32:49.434211 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:32:49.434255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-03 05:32:49.434270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-03 05:32:49.434283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-03 05:32:49.434294 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:32:49.434306 | orchestrator | 2026-02-03 05:32:49.434317 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-03 05:32:49.434330 | orchestrator | Tuesday 03 February 2026 05:30:46 +0000 (0:00:03.192) 0:09:50.335 ****** 2026-02-03 05:32:49.434341 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:32:49.434353 | orchestrator | ok: [testbed-node-1] 2026-02-03 
05:32:49.434364 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:32:49.434375 | orchestrator | 2026-02-03 05:32:49.434386 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-02-03 05:32:49.434397 | orchestrator | Tuesday 03 February 2026 05:30:48 +0000 (0:00:01.889) 0:09:52.224 ****** 2026-02-03 05:32:49.434407 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:32:49.434418 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:32:49.434429 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:32:49.434440 | orchestrator | 2026-02-03 05:32:49.434454 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-02-03 05:32:49.434467 | orchestrator | Tuesday 03 February 2026 05:30:49 +0000 (0:00:01.475) 0:09:53.700 ****** 2026-02-03 05:32:49.434486 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:32:49.434507 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:32:49.434527 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:32:49.434545 | orchestrator | 2026-02-03 05:32:49.434565 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-02-03 05:32:49.434584 | orchestrator | Tuesday 03 February 2026 05:30:56 +0000 (0:00:07.242) 0:10:00.942 ****** 2026-02-03 05:32:49.434604 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:32:49.434625 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:32:49.434645 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:32:49.434667 | orchestrator | 2026-02-03 05:32:49.434688 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-02-03 05:32:49.434708 | orchestrator | Tuesday 03 February 2026 05:31:04 +0000 (0:00:07.631) 0:10:08.574 ****** 2026-02-03 05:32:49.434724 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:32:49.434735 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:32:49.434755 | orchestrator | 
changed: [testbed-node-2] 2026-02-03 05:32:49.434765 | orchestrator | 2026-02-03 05:32:49.434776 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-02-03 05:32:49.434788 | orchestrator | Tuesday 03 February 2026 05:31:11 +0000 (0:00:07.283) 0:10:15.857 ****** 2026-02-03 05:32:49.434798 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:32:49.434809 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:32:49.434820 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:32:49.434831 | orchestrator | 2026-02-03 05:32:49.434868 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-02-03 05:32:49.434881 | orchestrator | Tuesday 03 February 2026 05:31:19 +0000 (0:00:08.206) 0:10:24.064 ****** 2026-02-03 05:32:49.434929 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:32:49.434949 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:32:49.434966 | orchestrator | 2026-02-03 05:32:49.434983 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-02-03 05:32:49.434996 | orchestrator | Tuesday 03 February 2026 05:31:23 +0000 (0:00:03.904) 0:10:27.969 ****** 2026-02-03 05:32:49.435015 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:32:49.435033 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:32:49.435051 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:32:49.435070 | orchestrator | 2026-02-03 05:32:49.435088 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-02-03 05:32:49.435106 | orchestrator | Tuesday 03 February 2026 05:31:36 +0000 (0:00:13.010) 0:10:40.979 ****** 2026-02-03 05:32:49.435125 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:32:49.435144 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:32:49.435163 | orchestrator | 2026-02-03 05:32:49.435177 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived 
container] ************* 2026-02-03 05:32:49.435188 | orchestrator | Tuesday 03 February 2026 05:31:40 +0000 (0:00:03.899) 0:10:44.878 ****** 2026-02-03 05:32:49.435199 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:32:49.435210 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:32:49.435225 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:32:49.435242 | orchestrator | 2026-02-03 05:32:49.435261 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-02-03 05:32:49.435278 | orchestrator | Tuesday 03 February 2026 05:31:48 +0000 (0:00:07.931) 0:10:52.810 ****** 2026-02-03 05:32:49.435295 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:32:49.435313 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:32:49.435330 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:32:49.435347 | orchestrator | 2026-02-03 05:32:49.435422 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-02-03 05:32:49.435446 | orchestrator | Tuesday 03 February 2026 05:31:55 +0000 (0:00:06.998) 0:10:59.808 ****** 2026-02-03 05:32:49.435463 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:32:49.435482 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:32:49.435500 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:32:49.435518 | orchestrator | 2026-02-03 05:32:49.435536 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-02-03 05:32:49.435554 | orchestrator | Tuesday 03 February 2026 05:32:02 +0000 (0:00:07.025) 0:11:06.834 ****** 2026-02-03 05:32:49.435574 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:32:49.435591 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:32:49.435607 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:32:49.435625 | orchestrator | 2026-02-03 05:32:49.435642 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] 
**************** 2026-02-03 05:32:49.435660 | orchestrator | Tuesday 03 February 2026 05:32:09 +0000 (0:00:07.008) 0:11:13.843 ****** 2026-02-03 05:32:49.435677 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:32:49.435696 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:32:49.435713 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:32:49.435731 | orchestrator | 2026-02-03 05:32:49.435750 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master haproxy to start] ************** 2026-02-03 05:32:49.435785 | orchestrator | Tuesday 03 February 2026 05:32:17 +0000 (0:00:07.653) 0:11:21.496 ****** 2026-02-03 05:32:49.435804 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:32:49.435822 | orchestrator | 2026-02-03 05:32:49.435840 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-02-03 05:32:49.435857 | orchestrator | Tuesday 03 February 2026 05:32:20 +0000 (0:00:03.640) 0:11:25.137 ****** 2026-02-03 05:32:49.435876 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:32:49.435922 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:32:49.435941 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:32:49.435960 | orchestrator | 2026-02-03 05:32:49.435979 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master proxysql to start] ************* 2026-02-03 05:32:49.436000 | orchestrator | Tuesday 03 February 2026 05:32:33 +0000 (0:00:12.733) 0:11:37.870 ****** 2026-02-03 05:32:49.436020 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:32:49.436041 | orchestrator | 2026-02-03 05:32:49.436060 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-02-03 05:32:49.436079 | orchestrator | Tuesday 03 February 2026 05:32:37 +0000 (0:00:03.688) 0:11:41.558 ****** 2026-02-03 05:32:49.436099 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:32:49.436118 | orchestrator | skipping: [testbed-node-2] 2026-02-03 
05:32:49.436138 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:32:49.436157 | orchestrator | 2026-02-03 05:32:49.436177 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-02-03 05:32:49.436196 | orchestrator | Tuesday 03 February 2026 05:32:44 +0000 (0:00:07.265) 0:11:48.824 ****** 2026-02-03 05:32:49.436216 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:32:49.436246 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:32:49.436266 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:32:49.436285 | orchestrator | 2026-02-03 05:32:49.436305 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-02-03 05:32:49.436324 | orchestrator | Tuesday 03 February 2026 05:32:46 +0000 (0:00:02.249) 0:11:51.074 ****** 2026-02-03 05:32:49.436344 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:32:49.436363 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:32:49.436383 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:32:49.436402 | orchestrator | 2026-02-03 05:32:49.436421 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 05:32:49.436442 | orchestrator | testbed-node-0 : ok=129  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-03 05:32:49.436464 | orchestrator | testbed-node-1 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-03 05:32:49.436504 | orchestrator | testbed-node-2 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-03 05:32:50.471533 | orchestrator | 2026-02-03 05:32:50.471639 | orchestrator | 2026-02-03 05:32:50.471655 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 05:32:50.471669 | orchestrator | Tuesday 03 February 2026 05:32:49 +0000 (0:00:02.533) 0:11:53.607 ****** 2026-02-03 05:32:50.471680 | orchestrator | 
=============================================================================== 2026-02-03 05:32:50.471692 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.01s 2026-02-03 05:32:50.471703 | orchestrator | loadbalancer : Start master proxysql container ------------------------- 12.73s 2026-02-03 05:32:50.471714 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 8.66s 2026-02-03 05:32:50.471725 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.21s 2026-02-03 05:32:50.471736 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 7.93s 2026-02-03 05:32:50.471746 | orchestrator | loadbalancer : Start master haproxy container --------------------------- 7.65s 2026-02-03 05:32:50.471758 | orchestrator | loadbalancer : Stop backup haproxy container ---------------------------- 7.63s 2026-02-03 05:32:50.471795 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 7.50s 2026-02-03 05:32:50.471806 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 7.41s 2026-02-03 05:32:50.471817 | orchestrator | loadbalancer : Stop backup proxysql container --------------------------- 7.28s 2026-02-03 05:32:50.471828 | orchestrator | loadbalancer : Start master keepalived container ------------------------ 7.27s 2026-02-03 05:32:50.471839 | orchestrator | loadbalancer : Stop backup keepalived container ------------------------- 7.24s 2026-02-03 05:32:50.471849 | orchestrator | loadbalancer : Stop master proxysql container --------------------------- 7.03s 2026-02-03 05:32:50.471860 | orchestrator | loadbalancer : Stop master keepalived container ------------------------- 7.01s 2026-02-03 05:32:50.471871 | orchestrator | loadbalancer : Stop master haproxy container ---------------------------- 7.00s 2026-02-03 05:32:50.471882 | orchestrator | haproxy-config 
: Copying over prometheus haproxy config ----------------- 6.52s 2026-02-03 05:32:50.471952 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 6.49s 2026-02-03 05:32:50.471964 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 6.24s 2026-02-03 05:32:50.471975 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 6.04s 2026-02-03 05:32:50.471986 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 5.89s 2026-02-03 05:32:50.857010 | orchestrator | + osism apply -a upgrade opensearch 2026-02-03 05:32:53.110479 | orchestrator | 2026-02-03 05:32:53 | INFO  | Task 8f916e07-4f1b-4178-a32e-69d1f71ac740 (opensearch) was prepared for execution. 2026-02-03 05:32:53.110580 | orchestrator | 2026-02-03 05:32:53 | INFO  | It takes a moment until task 8f916e07-4f1b-4178-a32e-69d1f71ac740 (opensearch) has been started and output is visible here. 2026-02-03 05:33:13.282520 | orchestrator | 2026-02-03 05:33:13.282611 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-03 05:33:13.282621 | orchestrator | 2026-02-03 05:33:13.282628 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-03 05:33:13.282635 | orchestrator | Tuesday 03 February 2026 05:32:59 +0000 (0:00:01.624) 0:00:01.624 ****** 2026-02-03 05:33:13.282641 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:33:13.282649 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:33:13.282657 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:33:13.282663 | orchestrator | 2026-02-03 05:33:13.282670 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-03 05:33:13.282676 | orchestrator | Tuesday 03 February 2026 05:33:01 +0000 (0:00:02.072) 0:00:03.697 ****** 2026-02-03 05:33:13.282683 | orchestrator | ok: [testbed-node-0] 
=> (item=enable_opensearch_True) 2026-02-03 05:33:13.282690 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-02-03 05:33:13.282696 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-02-03 05:33:13.282703 | orchestrator | 2026-02-03 05:33:13.282709 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-02-03 05:33:13.282715 | orchestrator | 2026-02-03 05:33:13.282721 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-03 05:33:13.282728 | orchestrator | Tuesday 03 February 2026 05:33:04 +0000 (0:00:02.600) 0:00:06.298 ****** 2026-02-03 05:33:13.282748 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:33:13.282755 | orchestrator | 2026-02-03 05:33:13.282761 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-02-03 05:33:13.282767 | orchestrator | Tuesday 03 February 2026 05:33:06 +0000 (0:00:02.223) 0:00:08.521 ****** 2026-02-03 05:33:13.282774 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-03 05:33:13.282780 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-03 05:33:13.282786 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-03 05:33:13.282807 | orchestrator | 2026-02-03 05:33:13.282814 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-02-03 05:33:13.282820 | orchestrator | Tuesday 03 February 2026 05:33:08 +0000 (0:00:02.237) 0:00:10.759 ****** 2026-02-03 05:33:13.282830 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:33:13.282840 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:33:13.282862 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:33:13.282927 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-03 05:33:13.282953 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-03 05:33:13.282966 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET 
/api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-03 05:33:13.282978 | orchestrator | 2026-02-03 05:33:13.282985 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-03 05:33:13.282991 | orchestrator | Tuesday 03 February 2026 05:33:11 +0000 (0:00:02.689) 0:00:13.448 ****** 2026-02-03 05:33:13.282998 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:33:13.283004 | orchestrator | 2026-02-03 05:33:13.283016 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-02-03 05:33:19.397962 | orchestrator | Tuesday 03 February 2026 05:33:13 +0000 (0:00:01.863) 0:00:15.312 ****** 2026-02-03 05:33:19.398101 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:33:19.398132 | orchestrator | ok: 
[testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:33:19.398137 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:33:19.398143 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-03 05:33:19.398165 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-03 05:33:19.398176 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-03 05:33:19.398182 | orchestrator | 2026-02-03 05:33:19.398187 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-02-03 05:33:19.398192 | orchestrator | Tuesday 03 February 2026 05:33:17 +0000 (0:00:04.123) 0:00:19.435 ****** 2026-02-03 05:33:19.398196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:33:19.398207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-03 05:33:21.435244 | 
orchestrator | skipping: [testbed-node-0] 2026-02-03 05:33:21.435351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:33:21.435382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:33:21.435392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-03 05:33:21.435403 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:33:21.435425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-03 05:33:21.435441 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:33:21.435450 | orchestrator | 2026-02-03 05:33:21.435458 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-02-03 05:33:21.435467 | orchestrator | Tuesday 03 February 2026 05:33:19 +0000 (0:00:01.996) 0:00:21.432 ****** 2026-02-03 05:33:21.435478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:33:21.435487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:33:21.435495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-03 05:33:21.435503 | 
orchestrator | skipping: [testbed-node-0] 2026-02-03 05:33:21.435516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-03 05:33:25.374609 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:33:25.374721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:33:25.374741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-03 05:33:25.374757 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:33:25.374769 | orchestrator | 2026-02-03 05:33:25.374781 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-02-03 05:33:25.374793 | orchestrator | Tuesday 03 February 2026 05:33:21 +0000 (0:00:02.033) 0:00:23.465 ****** 2026-02-03 05:33:25.374805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:33:25.374867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:33:25.374930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:33:25.374943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-03 05:33:25.374956 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-03 05:33:25.374990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-03 05:33:39.889144 | orchestrator | 2026-02-03 05:33:39.889229 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-03 05:33:39.889237 | orchestrator | Tuesday 03 February 2026 05:33:25 +0000 (0:00:03.941) 0:00:27.406 ****** 2026-02-03 05:33:39.889242 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:33:39.889247 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:33:39.889252 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:33:39.889255 | orchestrator | 2026-02-03 05:33:39.889260 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-02-03 05:33:39.889264 | orchestrator | Tuesday 03 February 2026 05:33:29 +0000 (0:00:03.873) 0:00:31.280 ****** 2026-02-03 05:33:39.889268 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:33:39.889272 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:33:39.889276 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:33:39.889280 | orchestrator | 2026-02-03 05:33:39.889284 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-02-03 05:33:39.889288 | orchestrator | Tuesday 03 February 2026 05:33:32 +0000 (0:00:03.249) 0:00:34.529 ****** 2026-02-03 05:33:39.889294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:33:39.889301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:33:39.889323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-03 05:33:39.889359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-03 05:33:39.889370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-03 05:33:39.889378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-03 05:33:39.889391 | orchestrator | 2026-02-03 05:33:39.889398 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-02-03 05:33:39.889403 | orchestrator | Tuesday 03 February 2026 05:33:36 +0000 (0:00:03.841) 0:00:38.371 ****** 2026-02-03 05:33:39.889407 | orchestrator | changed: [testbed-node-0] => { 2026-02-03 05:33:39.889412 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:33:39.889416 | orchestrator | } 2026-02-03 05:33:39.889420 | orchestrator | changed: [testbed-node-1] => { 2026-02-03 05:33:39.889424 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:33:39.889427 | orchestrator | } 2026-02-03 05:33:39.889431 | orchestrator | changed: [testbed-node-2] => { 2026-02-03 05:33:39.889435 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:33:39.889438 | orchestrator | } 2026-02-03 05:33:39.889442 | orchestrator | 2026-02-03 05:33:39.889446 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-03 05:33:39.889450 | orchestrator | Tuesday 03 February 2026 05:33:37 +0000 (0:00:01.432) 0:00:39.803 ****** 2026-02-03 05:33:39.889461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:36:45.211030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-03 05:36:45.211168 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:36:45.211188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:36:45.211202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-03 05:36:45.211241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-03 05:36:45.211256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-03 05:36:45.211277 | orchestrator | skipping: [testbed-node-2] 2026-02-03 
05:36:45.211289 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:36:45.211300 | orchestrator | 2026-02-03 05:36:45.211312 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-03 05:36:45.211324 | orchestrator | Tuesday 03 February 2026 05:33:39 +0000 (0:00:02.116) 0:00:41.919 ****** 2026-02-03 05:36:45.211335 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:36:45.211346 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:36:45.211356 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:36:45.211367 | orchestrator | 2026-02-03 05:36:45.211379 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-03 05:36:45.211389 | orchestrator | Tuesday 03 February 2026 05:33:41 +0000 (0:00:01.679) 0:00:43.599 ****** 2026-02-03 05:36:45.211400 | orchestrator | 2026-02-03 05:36:45.211412 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-03 05:36:45.211422 | orchestrator | Tuesday 03 February 2026 05:33:42 +0000 (0:00:00.461) 0:00:44.060 ****** 2026-02-03 05:36:45.211433 | orchestrator | 2026-02-03 05:36:45.211444 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-03 05:36:45.211455 | orchestrator | Tuesday 03 February 2026 05:33:42 +0000 (0:00:00.471) 0:00:44.531 ****** 2026-02-03 05:36:45.211466 | orchestrator | 2026-02-03 05:36:45.211476 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-03 05:36:45.211488 | orchestrator | Tuesday 03 February 2026 05:33:43 +0000 (0:00:00.873) 0:00:45.405 ****** 2026-02-03 05:36:45.211499 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:36:45.211511 | orchestrator | 2026-02-03 05:36:45.211522 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-03 05:36:45.211532 | orchestrator | Tuesday 03 
February 2026 05:33:47 +0000 (0:00:03.716) 0:00:49.121 ****** 2026-02-03 05:36:45.211543 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:36:45.211557 | orchestrator | 2026-02-03 05:36:45.211569 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-03 05:36:45.211582 | orchestrator | Tuesday 03 February 2026 05:33:51 +0000 (0:00:04.575) 0:00:53.697 ****** 2026-02-03 05:36:45.211594 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:36:45.211607 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:36:45.211619 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:36:45.211631 | orchestrator | 2026-02-03 05:36:45.211643 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-03 05:36:45.211655 | orchestrator | Tuesday 03 February 2026 05:34:59 +0000 (0:01:08.222) 0:02:01.920 ****** 2026-02-03 05:36:45.211668 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:36:45.211680 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:36:45.211691 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:36:45.211701 | orchestrator | 2026-02-03 05:36:45.211712 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-03 05:36:45.211723 | orchestrator | Tuesday 03 February 2026 05:36:34 +0000 (0:01:34.627) 0:03:36.548 ****** 2026-02-03 05:36:45.211734 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:36:45.211745 | orchestrator | 2026-02-03 05:36:45.211760 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-03 05:36:45.211771 | orchestrator | Tuesday 03 February 2026 05:36:36 +0000 (0:00:02.203) 0:03:38.752 ****** 2026-02-03 05:36:45.211782 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:36:45.211793 | orchestrator | 2026-02-03 05:36:45.211850 | orchestrator | 
TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-03 05:36:45.211872 | orchestrator | Tuesday 03 February 2026 05:36:40 +0000 (0:00:03.707) 0:03:42.459 ****** 2026-02-03 05:36:45.211883 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:36:45.211894 | orchestrator | 2026-02-03 05:36:45.211905 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-02-03 05:36:45.211917 | orchestrator | Tuesday 03 February 2026 05:36:43 +0000 (0:00:03.359) 0:03:45.819 ****** 2026-02-03 05:36:45.211928 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:36:45.211938 | orchestrator | 2026-02-03 05:36:45.211949 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-03 05:36:45.211968 | orchestrator | Tuesday 03 February 2026 05:36:45 +0000 (0:00:01.416) 0:03:47.236 ****** 2026-02-03 05:36:47.907290 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:36:47.907394 | orchestrator | 2026-02-03 05:36:47.907411 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 05:36:47.907425 | orchestrator | testbed-node-0 : ok=19  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-03 05:36:47.907438 | orchestrator | testbed-node-1 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-03 05:36:47.907450 | orchestrator | testbed-node-2 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-03 05:36:47.907461 | orchestrator | 2026-02-03 05:36:47.907472 | orchestrator | 2026-02-03 05:36:47.907483 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 05:36:47.907494 | orchestrator | Tuesday 03 February 2026 05:36:47 +0000 (0:00:02.263) 0:03:49.499 ****** 2026-02-03 05:36:47.907505 | orchestrator | =============================================================================== 
2026-02-03 05:36:47.907516 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 94.63s 2026-02-03 05:36:47.907527 | orchestrator | opensearch : Restart opensearch container ------------------------------ 68.22s 2026-02-03 05:36:47.907538 | orchestrator | opensearch : Perform a flush -------------------------------------------- 4.58s 2026-02-03 05:36:47.907549 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 4.12s 2026-02-03 05:36:47.907560 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.94s 2026-02-03 05:36:47.907571 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.87s 2026-02-03 05:36:47.907582 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 3.84s 2026-02-03 05:36:47.907593 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 3.72s 2026-02-03 05:36:47.907604 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.71s 2026-02-03 05:36:47.907615 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 3.36s 2026-02-03 05:36:47.907626 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 3.25s 2026-02-03 05:36:47.907637 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.69s 2026-02-03 05:36:47.907648 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.60s 2026-02-03 05:36:47.907659 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.26s 2026-02-03 05:36:47.907670 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 2.24s 2026-02-03 05:36:47.907680 | orchestrator | opensearch : include_tasks ---------------------------------------------- 2.22s 2026-02-03 
05:36:47.907691 | orchestrator | opensearch : include_tasks ---------------------------------------------- 2.20s 2026-02-03 05:36:47.907702 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.12s 2026-02-03 05:36:47.907713 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.07s 2026-02-03 05:36:47.907725 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 2.03s 2026-02-03 05:36:48.297089 | orchestrator | + osism apply -a upgrade memcached 2026-02-03 05:36:50.618364 | orchestrator | 2026-02-03 05:36:50 | INFO  | Task 7144aa34-6678-4839-a257-472804f5a96b (memcached) was prepared for execution. 2026-02-03 05:36:50.618447 | orchestrator | 2026-02-03 05:36:50 | INFO  | It takes a moment until task 7144aa34-6678-4839-a257-472804f5a96b (memcached) has been started and output is visible here. 2026-02-03 05:37:26.410934 | orchestrator | 2026-02-03 05:37:26.411030 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-03 05:37:26.411041 | orchestrator | 2026-02-03 05:37:26.411049 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-03 05:37:26.411056 | orchestrator | Tuesday 03 February 2026 05:36:57 +0000 (0:00:01.534) 0:00:01.534 ****** 2026-02-03 05:37:26.411063 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:37:26.411071 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:37:26.411078 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:37:26.411085 | orchestrator | 2026-02-03 05:37:26.411092 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-03 05:37:26.411099 | orchestrator | Tuesday 03 February 2026 05:36:59 +0000 (0:00:02.096) 0:00:03.631 ****** 2026-02-03 05:37:26.411119 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-02-03 05:37:26.411127 | orchestrator | ok: 
[testbed-node-1] => (item=enable_memcached_True) 2026-02-03 05:37:26.411134 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-02-03 05:37:26.411141 | orchestrator | 2026-02-03 05:37:26.411148 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-02-03 05:37:26.411154 | orchestrator | 2026-02-03 05:37:26.411161 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-02-03 05:37:26.411168 | orchestrator | Tuesday 03 February 2026 05:37:01 +0000 (0:00:02.474) 0:00:06.106 ****** 2026-02-03 05:37:26.411175 | orchestrator | included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:37:26.411182 | orchestrator | 2026-02-03 05:37:26.411189 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-02-03 05:37:26.411196 | orchestrator | Tuesday 03 February 2026 05:37:04 +0000 (0:00:02.570) 0:00:08.676 ****** 2026-02-03 05:37:26.411203 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-02-03 05:37:26.411210 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-02-03 05:37:26.411217 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-02-03 05:37:26.411223 | orchestrator | 2026-02-03 05:37:26.411230 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-02-03 05:37:26.411237 | orchestrator | Tuesday 03 February 2026 05:37:06 +0000 (0:00:01.889) 0:00:10.565 ****** 2026-02-03 05:37:26.411243 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-02-03 05:37:26.411250 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-02-03 05:37:26.411257 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-02-03 05:37:26.411264 | orchestrator | 2026-02-03 05:37:26.411270 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 
2026-02-03 05:37:26.411277 | orchestrator | Tuesday 03 February 2026 05:37:08 +0000 (0:00:02.886) 0:00:13.452 ****** 2026-02-03 05:37:26.411287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-03 05:37:26.411317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-03 05:37:26.411336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-03 05:37:26.411344 | orchestrator | 2026-02-03 05:37:26.411351 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 2026-02-03 05:37:26.411358 | orchestrator | Tuesday 03 February 2026 05:37:11 +0000 (0:00:02.515) 0:00:15.968 ****** 2026-02-03 05:37:26.411365 | orchestrator | changed: [testbed-node-0] => { 2026-02-03 05:37:26.411372 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:37:26.411379 | orchestrator | } 2026-02-03 05:37:26.411386 | orchestrator | changed: [testbed-node-1] => { 2026-02-03 05:37:26.411393 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:37:26.411400 | orchestrator | } 2026-02-03 05:37:26.411406 | orchestrator | changed: [testbed-node-2] => { 2026-02-03 05:37:26.411413 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:37:26.411420 | orchestrator | } 2026-02-03 05:37:26.411427 | orchestrator | 2026-02-03 05:37:26.411437 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-03 05:37:26.411444 | orchestrator | Tuesday 03 February 2026 05:37:13 +0000 (0:00:01.536) 0:00:17.504 ****** 2026-02-03 05:37:26.411452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-03 05:37:26.411459 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:37:26.411467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-03 05:37:26.411480 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:37:26.411489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-03 05:37:26.411497 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:37:26.411505 | orchestrator | 2026-02-03 05:37:26.411512 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-02-03 05:37:26.411520 | orchestrator | Tuesday 03 February 2026 05:37:15 +0000 (0:00:02.208) 0:00:19.713 ****** 2026-02-03 05:37:26.411527 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:37:26.411536 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:37:26.411543 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:37:26.411551 | orchestrator | 2026-02-03 05:37:26.411558 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 05:37:26.411567 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-03 05:37:26.411576 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-03 05:37:26.411584 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-03 05:37:26.411592 | orchestrator | 2026-02-03 05:37:26.411600 | orchestrator | 2026-02-03 05:37:26.411608 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 05:37:26.411620 | orchestrator | Tuesday 03 February 2026 05:37:26 +0000 (0:00:11.155) 0:00:30.868 ****** 2026-02-03 05:37:26.787948 | orchestrator | =============================================================================== 2026-02-03 05:37:26.788051 | orchestrator | memcached : Restart memcached container -------------------------------- 
11.16s 2026-02-03 05:37:26.788067 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.89s 2026-02-03 05:37:26.788079 | orchestrator | memcached : include_tasks ----------------------------------------------- 2.57s 2026-02-03 05:37:26.788091 | orchestrator | service-check-containers : memcached | Check containers ----------------- 2.52s 2026-02-03 05:37:26.788102 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.47s 2026-02-03 05:37:26.788134 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.21s 2026-02-03 05:37:26.788145 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.10s 2026-02-03 05:37:26.788172 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.89s 2026-02-03 05:37:26.788183 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 1.54s 2026-02-03 05:37:27.206565 | orchestrator | + osism apply -a upgrade redis 2026-02-03 05:37:29.434863 | orchestrator | 2026-02-03 05:37:29 | INFO  | Task 320fdbbd-bee7-4302-8286-257035e684d4 (redis) was prepared for execution. 2026-02-03 05:37:29.434965 | orchestrator | 2026-02-03 05:37:29 | INFO  | It takes a moment until task 320fdbbd-bee7-4302-8286-257035e684d4 (redis) has been started and output is visible here. 
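The memcached, redis, and opensearch items that `service-check-containers` loops over in the tasks above all share one schema: a per-service dict carrying the container name, image, volumes, a Docker healthcheck, and optional HAProxy frontend/backend settings. A minimal sketch of that shape (hypothetical helper names, not OSISM/kolla-ansible code; only the fields visible in the log are assumed):

```python
# Hypothetical sketch of the service-definition dicts printed in the log
# above. make_service() and healthcheck_command() are illustrative helpers,
# not part of kolla-ansible; field names mirror the log output exactly.

def make_service(name: str, image: str, port: int) -> dict:
    """Build a service definition with the structure seen in the log items."""
    return {
        "container_name": name,
        "group": name,
        "enabled": True,
        "image": image,
        "volumes": [
            f"/etc/kolla/{name}/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
        ],
        "dimensions": {},
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            # CMD-SHELL means Docker runs this string via the shell.
            "test": ["CMD-SHELL", f"healthcheck_listen {name} {port}"],
            "timeout": "30",
        },
        # HAProxy block keyed by service name; 'enabled': False (as for
        # memcached above) means no proxy frontend is created.
        "haproxy": {name: {"enabled": False, "mode": "tcp", "port": str(port)}},
    }


def healthcheck_command(service: dict) -> str:
    """Extract the shell command Docker executes for the healthcheck."""
    kind, cmd = service["healthcheck"]["test"]
    assert kind == "CMD-SHELL"
    return cmd


svc = make_service(
    "memcached",
    "registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208",
    11211,
)
print(healthcheck_command(svc))  # healthcheck_listen memcached 11211
```

The role compares each such dict against the running container and, on drift (new image tag, changed volumes or healthcheck), reports `changed` and notifies the restart handler — which is why the `Restart ... container` handlers fire after every image bump in this upgrade run.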
2026-02-03 05:37:42.912583 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-03 05:37:42.912676 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-03 05:37:42.912705 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-03 05:37:42.912715 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-03 05:37:42.912737 | orchestrator | 2026-02-03 05:37:42.912748 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-03 05:37:42.912760 | orchestrator | 2026-02-03 05:37:42.912771 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-03 05:37:42.912783 | orchestrator | Tuesday 03 February 2026 05:37:35 +0000 (0:00:01.421) 0:00:01.421 ****** 2026-02-03 05:37:42.912854 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:37:42.912870 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:37:42.912881 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:37:42.912891 | orchestrator | 2026-02-03 05:37:42.912901 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-03 05:37:42.912911 | orchestrator | Tuesday 03 February 2026 05:37:36 +0000 (0:00:00.945) 0:00:02.367 ****** 2026-02-03 05:37:42.912921 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-02-03 05:37:42.912932 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-02-03 05:37:42.912943 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-02-03 05:37:42.912953 | orchestrator | 2026-02-03 05:37:42.912965 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-02-03 05:37:42.912975 | orchestrator | 2026-02-03 05:37:42.912985 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-02-03 
05:37:42.912995 | orchestrator | Tuesday 03 February 2026 05:37:37 +0000 (0:00:01.039) 0:00:03.406 ****** 2026-02-03 05:37:42.913005 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:37:42.913017 | orchestrator | 2026-02-03 05:37:42.913028 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-02-03 05:37:42.913039 | orchestrator | Tuesday 03 February 2026 05:37:38 +0000 (0:00:01.240) 0:00:04.647 ****** 2026-02-03 05:37:42.913052 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-03 05:37:42.913065 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-03 05:37:42.913072 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-03 05:37:42.913126 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-03 05:37:42.913154 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-03 05:37:42.913162 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 
'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-03 05:37:42.913170 | orchestrator | 2026-02-03 05:37:42.913177 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-02-03 05:37:42.913184 | orchestrator | Tuesday 03 February 2026 05:37:40 +0000 (0:00:01.629) 0:00:06.276 ****** 2026-02-03 05:37:42.913192 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-03 05:37:42.913199 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-03 05:37:42.913207 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-03 05:37:42.913222 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-03 05:37:42.913236 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-03 05:37:48.281411 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-03 05:37:48.281518 | orchestrator | 2026-02-03 05:37:48.281534 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-02-03 05:37:48.281546 | orchestrator | Tuesday 03 February 2026 05:37:42 +0000 (0:00:02.293) 0:00:08.570 ****** 2026-02-03 05:37:48.281558 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-03 05:37:48.281569 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-03 05:37:48.281602 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-03 05:37:48.281627 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-03 05:37:48.281638 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 
'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-03 05:37:48.281666 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-03 05:37:48.281677 | orchestrator | 2026-02-03 05:37:48.281687 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-02-03 05:37:48.281697 | orchestrator | Tuesday 03 February 2026 05:37:46 +0000 (0:00:03.122) 0:00:11.693 ****** 2026-02-03 05:37:48.281708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-03 05:37:48.281718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-03 05:37:48.281738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-03 05:37:48.281752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-03 05:37:48.281766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-03 05:37:48.281784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-03 05:38:11.768115 | orchestrator | 2026-02-03 05:38:11.768264 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-02-03 05:38:11.768294 | orchestrator | Tuesday 03 February 2026 05:37:48 +0000 (0:00:02.248) 0:00:13.942 ****** 2026-02-03 05:38:11.768316 | orchestrator | changed: [testbed-node-0] 
=> { 2026-02-03 05:38:11.768336 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:38:11.768353 | orchestrator | } 2026-02-03 05:38:11.768369 | orchestrator | changed: [testbed-node-1] => { 2026-02-03 05:38:11.768387 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:38:11.768405 | orchestrator | } 2026-02-03 05:38:11.768423 | orchestrator | changed: [testbed-node-2] => { 2026-02-03 05:38:11.768440 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:38:11.768458 | orchestrator | } 2026-02-03 05:38:11.768476 | orchestrator | 2026-02-03 05:38:11.768494 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-03 05:38:11.768511 | orchestrator | Tuesday 03 February 2026 05:37:48 +0000 (0:00:00.638) 0:00:14.580 ****** 2026-02-03 05:38:11.768532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-03 05:38:11.768587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-03 05:38:11.768608 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-03 05:38:11.768628 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-03 05:38:11.768665 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:38:11.768702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-03 05:38:11.768722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-03 05:38:11.768740 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:38:11.768845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': 
{'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-03 05:38:11.768873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-03 05:38:11.768912 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:38:11.768933 | orchestrator | 2026-02-03 05:38:11.768954 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-03 05:38:11.768973 | orchestrator | Tuesday 03 February 2026 05:37:50 +0000 (0:00:01.118) 0:00:15.699 ****** 2026-02-03 05:38:11.768993 | orchestrator | 2026-02-03 05:38:11.769013 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-03 05:38:11.769033 | orchestrator | Tuesday 03 February 2026 05:37:50 +0000 (0:00:00.101) 0:00:15.800 ****** 2026-02-03 05:38:11.769053 | orchestrator | 2026-02-03 05:38:11.769071 | orchestrator | TASK [redis : 
Flush handlers] ************************************************** 2026-02-03 05:38:11.769090 | orchestrator | Tuesday 03 February 2026 05:37:50 +0000 (0:00:00.074) 0:00:15.874 ****** 2026-02-03 05:38:11.769109 | orchestrator | 2026-02-03 05:38:11.769129 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-02-03 05:38:11.769149 | orchestrator | Tuesday 03 February 2026 05:37:50 +0000 (0:00:00.086) 0:00:15.960 ****** 2026-02-03 05:38:11.769168 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:38:11.769187 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:38:11.769205 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:38:11.769225 | orchestrator | 2026-02-03 05:38:11.769244 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-02-03 05:38:11.769263 | orchestrator | Tuesday 03 February 2026 05:38:00 +0000 (0:00:10.040) 0:00:26.001 ****** 2026-02-03 05:38:11.769283 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:38:11.769303 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:38:11.769323 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:38:11.769342 | orchestrator | 2026-02-03 05:38:11.769361 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 05:38:11.769381 | orchestrator | testbed-node-0 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-03 05:38:11.769403 | orchestrator | testbed-node-1 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-03 05:38:11.769423 | orchestrator | testbed-node-2 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-03 05:38:11.769442 | orchestrator | 2026-02-03 05:38:11.769460 | orchestrator | 2026-02-03 05:38:11.769480 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 05:38:11.769497 | orchestrator 
| Tuesday 03 February 2026 05:38:11 +0000 (0:00:10.933) 0:00:36.935 ****** 2026-02-03 05:38:11.769518 | orchestrator | =============================================================================== 2026-02-03 05:38:11.769537 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.93s 2026-02-03 05:38:11.769556 | orchestrator | redis : Restart redis container ---------------------------------------- 10.04s 2026-02-03 05:38:11.769576 | orchestrator | redis : Copying over redis config files --------------------------------- 3.12s 2026-02-03 05:38:11.769595 | orchestrator | redis : Copying over default config.json files -------------------------- 2.29s 2026-02-03 05:38:11.769613 | orchestrator | service-check-containers : redis | Check containers --------------------- 2.25s 2026-02-03 05:38:11.769633 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.63s 2026-02-03 05:38:11.769654 | orchestrator | redis : include_tasks --------------------------------------------------- 1.24s 2026-02-03 05:38:11.769723 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.12s 2026-02-03 05:38:11.769738 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.04s 2026-02-03 05:38:11.769762 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.95s 2026-02-03 05:38:11.769773 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 0.64s 2026-02-03 05:38:11.769833 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.26s 2026-02-03 05:38:12.207260 | orchestrator | + osism apply -a upgrade mariadb 2026-02-03 05:38:14.397833 | orchestrator | 2026-02-03 05:38:14 | INFO  | Task e3988779-8926-4f93-a2de-0ec6df563918 (mariadb) was prepared for execution. 
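The TASKS RECAP above lists per-task durations in the format produced by Ansible's `profile_tasks`-style timing output (`<task name> ---- 10.04s`). A minimal sketch for pulling those timings out of a captured log, e.g. to spot the slowest upgrade steps across runs; `parse_recap_line` is a hypothetical helper written for this log format, not part of the job or of osism:

```python
import re

# Matches a recap line such as:
#   "redis : Restart redis container ------------------- 10.04s"
# capturing the task name and the duration in seconds.
# Hypothetical helper, assumes the dash-padded format seen in this log.
RECAP_RE = re.compile(r"^(?P<task>.+?)\s-{2,}\s*(?P<secs>\d+\.\d+)s$")


def parse_recap_line(line: str):
    """Return (task_name, seconds) for a recap line, or None if it
    does not match the expected dash-padded timing format."""
    m = RECAP_RE.match(line.strip())
    if not m:
        return None
    return m.group("task").rstrip(" -"), float(m.group("secs"))


def slowest_tasks(lines, n=3):
    """Return the n slowest (task, seconds) pairs from recap lines."""
    parsed = [p for p in (parse_recap_line(l) for l in lines) if p]
    return sorted(parsed, key=lambda p: p[1], reverse=True)[:n]
```

On the recap shown above, this would rank the two container-restart handlers (about 10.9s and 10.0s) ahead of the config-copy tasks, which matches the timing summary printed by the job itself.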
2026-02-03 05:38:14.397928 | orchestrator | 2026-02-03 05:38:14 | INFO  | It takes a moment until task e3988779-8926-4f93-a2de-0ec6df563918 (mariadb) has been started and output is visible here. 2026-02-03 05:38:41.906362 | orchestrator | 2026-02-03 05:38:41.906473 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-03 05:38:41.906489 | orchestrator | 2026-02-03 05:38:41.906501 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-03 05:38:41.906513 | orchestrator | Tuesday 03 February 2026 05:38:20 +0000 (0:00:01.663) 0:00:01.663 ****** 2026-02-03 05:38:41.906524 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:38:41.906536 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:38:41.906547 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:38:41.906558 | orchestrator | 2026-02-03 05:38:41.906569 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-03 05:38:41.906580 | orchestrator | Tuesday 03 February 2026 05:38:22 +0000 (0:00:01.974) 0:00:03.638 ****** 2026-02-03 05:38:41.906591 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-03 05:38:41.906602 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-02-03 05:38:41.906613 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-03 05:38:41.906624 | orchestrator | 2026-02-03 05:38:41.906635 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-03 05:38:41.906646 | orchestrator | 2026-02-03 05:38:41.906657 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-03 05:38:41.906668 | orchestrator | Tuesday 03 February 2026 05:38:24 +0000 (0:00:02.017) 0:00:05.656 ****** 2026-02-03 05:38:41.906679 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-03 05:38:41.906690 | 
orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-03 05:38:41.906701 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-03 05:38:41.906711 | orchestrator | 2026-02-03 05:38:41.906722 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-03 05:38:41.906733 | orchestrator | Tuesday 03 February 2026 05:38:26 +0000 (0:00:01.649) 0:00:07.306 ****** 2026-02-03 05:38:41.906744 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:38:41.906756 | orchestrator | 2026-02-03 05:38:41.906767 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-03 05:38:41.906835 | orchestrator | Tuesday 03 February 2026 05:38:28 +0000 (0:00:01.986) 0:00:09.292 ****** 2026-02-03 05:38:41.906879 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 
2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-03 05:38:41.906946 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-03 05:38:41.906968 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-03 05:38:41.906987 | orchestrator | 2026-02-03 05:38:41.906998 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-02-03 05:38:41.907010 | orchestrator | Tuesday 03 February 2026 05:38:32 +0000 (0:00:04.259) 0:00:13.551 ****** 2026-02-03 05:38:41.907021 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:38:41.907032 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:38:41.907043 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:38:41.907054 | orchestrator | 2026-02-03 05:38:41.907065 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-02-03 05:38:41.907076 | orchestrator | Tuesday 03 February 2026 05:38:34 +0000 (0:00:01.699) 0:00:15.251 ****** 2026-02-03 05:38:41.907087 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:38:41.907098 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:38:41.907109 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:38:41.907120 | orchestrator | 2026-02-03 05:38:41.907133 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-02-03 05:38:41.907152 | orchestrator | Tuesday 03 February 2026 05:38:36 +0000 (0:00:02.305) 0:00:17.557 ****** 2026-02-03 05:38:41.907190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-03 05:38:55.350174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-03 05:38:55.350270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-03 05:38:55.350278 | orchestrator | 2026-02-03 05:38:55.350284 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-02-03 05:38:55.350291 | orchestrator | Tuesday 03 February 2026 05:38:41 +0000 (0:00:05.191) 0:00:22.748 ****** 2026-02-03 05:38:55.350295 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:38:55.350301 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:38:55.350305 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:38:55.350311 | 
orchestrator | 2026-02-03 05:38:55.350316 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-02-03 05:38:55.350332 | orchestrator | Tuesday 03 February 2026 05:38:44 +0000 (0:00:02.152) 0:00:24.901 ****** 2026-02-03 05:38:55.350337 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:38:55.350341 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:38:55.350352 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:38:55.350356 | orchestrator | 2026-02-03 05:38:55.350361 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-03 05:38:55.350366 | orchestrator | Tuesday 03 February 2026 05:38:49 +0000 (0:00:05.305) 0:00:30.207 ****** 2026-02-03 05:38:55.350371 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:38:55.350376 | orchestrator | 2026-02-03 05:38:55.350380 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-03 05:38:55.350385 | orchestrator | Tuesday 03 February 2026 05:38:51 +0000 (0:00:02.006) 0:00:32.214 ****** 2026-02-03 05:38:55.350394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': 
{'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 05:38:55.350400 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:38:55.350408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 05:39:03.984595 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:39:03.984732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 05:39:03.984754 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:39:03.984834 | orchestrator | 2026-02-03 05:39:03.984853 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-03 05:39:03.984865 | orchestrator | Tuesday 03 February 2026 05:38:55 +0000 (0:00:03.978) 0:00:36.192 ****** 2026-02-03 05:39:03.984879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 
'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 05:39:03.984914 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:39:03.984960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 05:39:03.984984 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:39:03.985005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 05:39:03.985040 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:39:03.985053 | orchestrator | 2026-02-03 05:39:03.985064 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-03 05:39:03.985075 | orchestrator | Tuesday 03 February 2026 05:38:59 +0000 (0:00:03.911) 0:00:40.104 ****** 2026-02-03 05:39:03.985104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 05:39:08.764201 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:39:08.764309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 05:39:08.764353 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:39:08.764381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 
'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 05:39:08.764392 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:39:08.764403 | orchestrator | 2026-02-03 05:39:08.764413 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-02-03 05:39:08.764425 | orchestrator | Tuesday 03 February 2026 05:39:03 +0000 (0:00:04.724) 0:00:44.828 ****** 2026-02-03 05:39:08.764453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-03 05:39:08.764481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-03 05:39:08.764503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-03 05:39:24.915625 | orchestrator | 2026-02-03 05:39:24.915849 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-02-03 05:39:24.915872 | orchestrator | Tuesday 03 February 2026 05:39:08 +0000 (0:00:04.776) 0:00:49.605 ****** 2026-02-03 05:39:24.915884 | orchestrator | changed: [testbed-node-0] => { 2026-02-03 05:39:24.915895 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:39:24.915905 | orchestrator | } 2026-02-03 05:39:24.915915 | orchestrator | changed: [testbed-node-1] => { 2026-02-03 05:39:24.915925 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:39:24.915935 | orchestrator | } 2026-02-03 05:39:24.915944 | orchestrator | 
changed: [testbed-node-2] => { 2026-02-03 05:39:24.915954 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:39:24.915963 | orchestrator | } 2026-02-03 05:39:24.915973 | orchestrator | 2026-02-03 05:39:24.915983 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-03 05:39:24.915993 | orchestrator | Tuesday 03 February 2026 05:39:10 +0000 (0:00:01.468) 0:00:51.074 ****** 2026-02-03 05:39:24.916021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 05:39:24.916036 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:39:24.916068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 05:39:24.916093 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:39:24.916108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 05:39:24.916120 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:39:24.916132 | orchestrator | 2026-02-03 05:39:24.916144 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-02-03 05:39:24.916155 | orchestrator | Tuesday 03 February 2026 05:39:14 +0000 (0:00:04.473) 0:00:55.547 ****** 2026-02-03 05:39:24.916166 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:39:24.916178 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:39:24.916189 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:39:24.916200 | orchestrator | 2026-02-03 05:39:24.916211 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-02-03 05:39:24.916222 | orchestrator | Tuesday 03 February 2026 05:39:16 +0000 (0:00:01.544) 0:00:57.092 ****** 2026-02-03 05:39:24.916234 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:39:24.916246 | orchestrator | 2026-02-03 05:39:24.916257 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-02-03 05:39:24.916268 | orchestrator | Tuesday 03 February 2026 05:39:17 +0000 (0:00:01.163) 0:00:58.256 ****** 2026-02-03 05:39:24.916279 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:39:24.916290 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:39:24.916301 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:39:24.916313 | orchestrator | 2026-02-03 05:39:24.916324 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************ 2026-02-03 05:39:24.916343 | orchestrator | Tuesday 03 February 2026 05:39:18 +0000 (0:00:01.480) 0:00:59.736 ****** 2026-02-03 05:39:24.916355 | orchestrator | skipping: 
[testbed-node-0] 2026-02-03 05:39:24.916366 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:39:24.916377 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:39:24.916388 | orchestrator | 2026-02-03 05:39:24.916400 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-02-03 05:39:24.916417 | orchestrator | Tuesday 03 February 2026 05:39:20 +0000 (0:00:01.699) 0:01:01.436 ****** 2026-02-03 05:39:24.916435 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:39:24.916453 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:39:24.916470 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:39:24.916487 | orchestrator | 2026-02-03 05:39:24.916505 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-02-03 05:39:24.916522 | orchestrator | Tuesday 03 February 2026 05:39:22 +0000 (0:00:01.481) 0:01:02.918 ****** 2026-02-03 05:39:24.916539 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:39:24.916557 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:39:24.916573 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:39:24.916590 | orchestrator | 2026-02-03 05:39:24.916606 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] *************************** 2026-02-03 05:39:24.916623 | orchestrator | Tuesday 03 February 2026 05:39:23 +0000 (0:00:01.342) 0:01:04.261 ****** 2026-02-03 05:39:24.916640 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:39:24.916658 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:39:24.916675 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:39:24.916692 | orchestrator | 2026-02-03 05:39:24.916725 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] **************************** 2026-02-03 05:39:44.470724 | orchestrator | Tuesday 03 February 2026 05:39:24 +0000 (0:00:01.493) 0:01:05.755 ****** 2026-02-03 05:39:44.470880 | orchestrator | skipping: 
[testbed-node-0] 2026-02-03 05:39:44.470894 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:39:44.470904 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:39:44.470913 | orchestrator | 2026-02-03 05:39:44.470923 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-02-03 05:39:44.470932 | orchestrator | Tuesday 03 February 2026 05:39:26 +0000 (0:00:01.782) 0:01:07.538 ****** 2026-02-03 05:39:44.470941 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-03 05:39:44.470950 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-03 05:39:44.470959 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-03 05:39:44.470968 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:39:44.470976 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-03 05:39:44.470985 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-03 05:39:44.470994 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-03 05:39:44.471002 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:39:44.471011 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-03 05:39:44.471020 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-03 05:39:44.471028 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-03 05:39:44.471037 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:39:44.471046 | orchestrator | 2026-02-03 05:39:44.471055 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] *** 2026-02-03 05:39:44.471064 | orchestrator | Tuesday 03 February 2026 05:39:28 +0000 (0:00:01.491) 0:01:09.030 ****** 2026-02-03 05:39:44.471073 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:39:44.471081 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:39:44.471090 | 
orchestrator | skipping: [testbed-node-2] 2026-02-03 05:39:44.471099 | orchestrator | 2026-02-03 05:39:44.471108 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-02-03 05:39:44.471116 | orchestrator | Tuesday 03 February 2026 05:39:29 +0000 (0:00:01.414) 0:01:10.444 ****** 2026-02-03 05:39:44.471144 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:39:44.471153 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:39:44.471162 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:39:44.471170 | orchestrator | 2026-02-03 05:39:44.471179 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-02-03 05:39:44.471188 | orchestrator | Tuesday 03 February 2026 05:39:31 +0000 (0:00:01.455) 0:01:11.899 ****** 2026-02-03 05:39:44.471196 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:39:44.471205 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:39:44.471214 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:39:44.471222 | orchestrator | 2026-02-03 05:39:44.471231 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-02-03 05:39:44.471240 | orchestrator | Tuesday 03 February 2026 05:39:32 +0000 (0:00:01.465) 0:01:13.364 ****** 2026-02-03 05:39:44.471249 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:39:44.471269 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:39:44.471280 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:39:44.471291 | orchestrator | 2026-02-03 05:39:44.471302 | orchestrator | TASK [mariadb : Starting first MariaDB container] ****************************** 2026-02-03 05:39:44.471311 | orchestrator | Tuesday 03 February 2026 05:39:33 +0000 (0:00:01.443) 0:01:14.808 ****** 2026-02-03 05:39:44.471321 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:39:44.471332 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:39:44.471343 | 
orchestrator | skipping: [testbed-node-2] 2026-02-03 05:39:44.471353 | orchestrator | 2026-02-03 05:39:44.471363 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-02-03 05:39:44.471373 | orchestrator | Tuesday 03 February 2026 05:39:35 +0000 (0:00:01.580) 0:01:16.388 ****** 2026-02-03 05:39:44.471383 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:39:44.471393 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:39:44.471404 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:39:44.471413 | orchestrator | 2026-02-03 05:39:44.471423 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-02-03 05:39:44.471433 | orchestrator | Tuesday 03 February 2026 05:39:37 +0000 (0:00:01.696) 0:01:18.084 ****** 2026-02-03 05:39:44.471443 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:39:44.471452 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:39:44.471462 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:39:44.471472 | orchestrator | 2026-02-03 05:39:44.471482 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-02-03 05:39:44.471492 | orchestrator | Tuesday 03 February 2026 05:39:38 +0000 (0:00:01.568) 0:01:19.653 ****** 2026-02-03 05:39:44.471502 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:39:44.471511 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:39:44.471521 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:39:44.471532 | orchestrator | 2026-02-03 05:39:44.471542 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-02-03 05:39:44.471552 | orchestrator | Tuesday 03 February 2026 05:39:40 +0000 (0:00:01.583) 0:01:21.236 ****** 2026-02-03 05:39:44.471585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 05:39:44.471607 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:39:44.471623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 05:39:44.471633 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:39:44.471649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 05:40:02.877614 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:40:02.877708 | orchestrator | 2026-02-03 05:40:02.877720 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-02-03 05:40:02.877729 | orchestrator | Tuesday 03 February 2026 
05:39:44 +0000 (0:00:04.072) 0:01:25.309 ****** 2026-02-03 05:40:02.877737 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:40:02.877744 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:40:02.877753 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:40:02.877804 | orchestrator | 2026-02-03 05:40:02.877813 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-02-03 05:40:02.877821 | orchestrator | Tuesday 03 February 2026 05:39:46 +0000 (0:00:01.808) 0:01:27.117 ****** 2026-02-03 05:40:02.877847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 05:40:02.877859 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:40:02.877894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 05:40:02.877933 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:40:02.877946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-03 05:40:02.877954 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:40:02.877962 | orchestrator | 2026-02-03 05:40:02.877969 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-02-03 05:40:02.877976 | orchestrator | Tuesday 03 February 2026 05:39:49 +0000 (0:00:03.638) 0:01:30.756 ****** 2026-02-03 05:40:02.877984 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:40:02.877991 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:40:02.877998 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:40:02.878005 | orchestrator | 2026-02-03 05:40:02.878050 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-02-03 05:40:02.878058 | orchestrator | Tuesday 03 February 2026 05:39:51 +0000 (0:00:01.913) 0:01:32.670 ****** 2026-02-03 05:40:02.878066 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:40:02.878073 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:40:02.878080 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:40:02.878093 | orchestrator | 2026-02-03 05:40:02.878101 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-02-03 05:40:02.878109 | orchestrator | Tuesday 03 February 2026 05:39:53 +0000 (0:00:01.495) 0:01:34.166 ****** 2026-02-03 05:40:02.878116 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:40:02.878123 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:40:02.878130 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:40:02.878138 | orchestrator | 2026-02-03 05:40:02.878145 | orchestrator 
| TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-02-03 05:40:02.878152 | orchestrator | Tuesday 03 February 2026 05:39:54 +0000 (0:00:01.452) 0:01:35.618 ****** 2026-02-03 05:40:02.878161 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:40:02.878170 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:40:02.878179 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:40:02.878187 | orchestrator | 2026-02-03 05:40:02.878196 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-03 05:40:02.878205 | orchestrator | Tuesday 03 February 2026 05:39:56 +0000 (0:00:02.083) 0:01:37.702 ****** 2026-02-03 05:40:02.878213 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:40:02.878222 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:40:02.878231 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:40:02.878239 | orchestrator | 2026-02-03 05:40:02.878246 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-02-03 05:40:02.878253 | orchestrator | Tuesday 03 February 2026 05:39:58 +0000 (0:00:02.089) 0:01:39.792 ****** 2026-02-03 05:40:02.878261 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:40:02.878268 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:40:02.878276 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:40:02.878283 | orchestrator | 2026-02-03 05:40:02.878290 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-02-03 05:40:02.878297 | orchestrator | Tuesday 03 February 2026 05:40:01 +0000 (0:00:02.095) 0:01:41.888 ****** 2026-02-03 05:40:02.878304 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:40:02.878312 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:40:02.878319 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:40:02.878326 | orchestrator | 2026-02-03 05:40:02.878334 | orchestrator | TASK [mariadb : Establish whether the 
cluster has already existed] ************* 2026-02-03 05:40:02.878341 | orchestrator | Tuesday 03 February 2026 05:40:02 +0000 (0:00:01.582) 0:01:43.471 ****** 2026-02-03 05:40:02.878354 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:42:53.602779 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:42:53.602921 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:42:53.602936 | orchestrator | 2026-02-03 05:42:53.602946 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-02-03 05:42:53.602956 | orchestrator | Tuesday 03 February 2026 05:40:04 +0000 (0:00:01.531) 0:01:45.002 ****** 2026-02-03 05:42:53.602964 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:42:53.602972 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:42:53.602980 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:42:53.602988 | orchestrator | 2026-02-03 05:42:53.602997 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-02-03 05:42:53.603005 | orchestrator | Tuesday 03 February 2026 05:40:06 +0000 (0:00:02.215) 0:01:47.217 ****** 2026-02-03 05:42:53.603018 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:42:53.603032 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:42:53.603047 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:42:53.603061 | orchestrator | 2026-02-03 05:42:53.603075 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-02-03 05:42:53.603089 | orchestrator | Tuesday 03 February 2026 05:40:07 +0000 (0:00:01.472) 0:01:48.689 ****** 2026-02-03 05:42:53.603102 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:42:53.603117 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:42:53.603131 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:42:53.603144 | orchestrator | 2026-02-03 05:42:53.603160 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-02-03 
05:42:53.603193 | orchestrator | Tuesday 03 February 2026 05:40:09 +0000 (0:00:01.672) 0:01:50.362 ****** 2026-02-03 05:42:53.603202 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:42:53.603210 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:42:53.603218 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:42:53.603231 | orchestrator | 2026-02-03 05:42:53.603244 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-02-03 05:42:53.603266 | orchestrator | Tuesday 03 February 2026 05:40:13 +0000 (0:00:03.887) 0:01:54.250 ****** 2026-02-03 05:42:53.603281 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:42:53.603293 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:42:53.603305 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:42:53.603318 | orchestrator | 2026-02-03 05:42:53.603331 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-02-03 05:42:53.603345 | orchestrator | Tuesday 03 February 2026 05:40:14 +0000 (0:00:01.502) 0:01:55.753 ****** 2026-02-03 05:42:53.603358 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:42:53.603370 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:42:53.603383 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:42:53.603397 | orchestrator | 2026-02-03 05:42:53.603413 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-02-03 05:42:53.603427 | orchestrator | Tuesday 03 February 2026 05:40:16 +0000 (0:00:01.537) 0:01:57.290 ****** 2026-02-03 05:42:53.603442 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:42:53.603454 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:42:53.603466 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:42:53.603476 | orchestrator | 2026-02-03 05:42:53.603486 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-03 05:42:53.603496 | orchestrator | Tuesday 03 
February 2026 05:40:18 +0000 (0:00:01.894) 0:01:59.185 ****** 2026-02-03 05:42:53.603504 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:42:53.603514 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:42:53.603523 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:42:53.603532 | orchestrator | 2026-02-03 05:42:53.603542 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-03 05:42:53.603551 | orchestrator | Tuesday 03 February 2026 05:40:19 +0000 (0:00:01.627) 0:02:00.813 ****** 2026-02-03 05:42:53.603560 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:42:53.603570 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:42:53.603579 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:42:53.603587 | orchestrator | 2026-02-03 05:42:53.603597 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-02-03 05:42:53.603606 | orchestrator | Tuesday 03 February 2026 05:40:21 +0000 (0:00:01.796) 0:02:02.609 ****** 2026-02-03 05:42:53.603615 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:42:53.603624 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:42:53.603634 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:42:53.603644 | orchestrator | 2026-02-03 05:42:53.603652 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-02-03 05:42:53.603662 | orchestrator | Tuesday 03 February 2026 05:40:23 +0000 (0:00:01.731) 0:02:04.341 ****** 2026-02-03 05:42:53.603671 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:42:53.603680 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:42:53.603689 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:42:53.603698 | orchestrator | 2026-02-03 05:42:53.603708 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-03 05:42:53.603717 | orchestrator | 2026-02-03 
05:42:53.603726 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-03 05:42:53.603734 | orchestrator | Tuesday 03 February 2026 05:40:25 +0000 (0:00:02.268) 0:02:06.610 ****** 2026-02-03 05:42:53.603741 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:42:53.603749 | orchestrator | 2026-02-03 05:42:53.603757 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-03 05:42:53.603765 | orchestrator | Tuesday 03 February 2026 05:40:54 +0000 (0:00:28.307) 0:02:34.918 ****** 2026-02-03 05:42:53.603788 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:42:53.603802 | orchestrator | 2026-02-03 05:42:53.603814 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-03 05:42:53.603828 | orchestrator | Tuesday 03 February 2026 05:41:01 +0000 (0:00:07.662) 0:02:42.580 ****** 2026-02-03 05:42:53.603841 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:42:53.603877 | orchestrator | 2026-02-03 05:42:53.603889 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-03 05:42:53.603903 | orchestrator | 2026-02-03 05:42:53.603916 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-03 05:42:53.603930 | orchestrator | Tuesday 03 February 2026 05:41:05 +0000 (0:00:03.281) 0:02:45.862 ****** 2026-02-03 05:42:53.603944 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:42:53.603957 | orchestrator | 2026-02-03 05:42:53.603970 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-03 05:42:53.603996 | orchestrator | Tuesday 03 February 2026 05:41:33 +0000 (0:00:28.036) 0:03:13.898 ****** 2026-02-03 05:42:53.604005 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:42:53.604013 | orchestrator | 2026-02-03 05:42:53.604021 | orchestrator | TASK [mariadb : Wait for MariaDB 
service to sync WSREP] ************************ 2026-02-03 05:42:53.604029 | orchestrator | Tuesday 03 February 2026 05:41:37 +0000 (0:00:04.649) 0:03:18.547 ****** 2026-02-03 05:42:53.604036 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:42:53.604044 | orchestrator | 2026-02-03 05:42:53.604052 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-03 05:42:53.604060 | orchestrator | 2026-02-03 05:42:53.604068 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-03 05:42:53.604075 | orchestrator | Tuesday 03 February 2026 05:41:40 +0000 (0:00:03.225) 0:03:21.773 ****** 2026-02-03 05:42:53.604083 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:42:53.604091 | orchestrator | 2026-02-03 05:42:53.604099 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-03 05:42:53.604107 | orchestrator | Tuesday 03 February 2026 05:42:09 +0000 (0:00:28.085) 0:03:49.858 ****** 2026-02-03 05:42:53.604114 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:42:53.604122 | orchestrator | 2026-02-03 05:42:53.604130 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-03 05:42:53.604138 | orchestrator | Tuesday 03 February 2026 05:42:14 +0000 (0:00:05.378) 0:03:55.237 ****** 2026-02-03 05:42:53.604146 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-02-03 05:42:53.604154 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-03 05:42:53.604162 | orchestrator | mariadb_bootstrap_restart 2026-02-03 05:42:53.604170 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:42:53.604178 | orchestrator | 2026-02-03 05:42:53.604186 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-03 05:42:53.604199 | orchestrator | skipping: no hosts matched 2026-02-03 
05:42:53.604211 | orchestrator | 2026-02-03 05:42:53.604224 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-03 05:42:53.604236 | orchestrator | skipping: no hosts matched 2026-02-03 05:42:53.604249 | orchestrator | 2026-02-03 05:42:53.604261 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-03 05:42:53.604271 | orchestrator | 2026-02-03 05:42:53.604284 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-03 05:42:53.604297 | orchestrator | Tuesday 03 February 2026 05:42:18 +0000 (0:00:04.303) 0:03:59.540 ****** 2026-02-03 05:42:53.604310 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:42:53.604323 | orchestrator | 2026-02-03 05:42:53.604334 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-02-03 05:42:53.604346 | orchestrator | Tuesday 03 February 2026 05:42:20 +0000 (0:00:02.048) 0:04:01.589 ****** 2026-02-03 05:42:53.604358 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:42:53.604384 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:42:53.604397 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:42:53.604411 | orchestrator | 2026-02-03 05:42:53.604424 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-02-03 05:42:53.604438 | orchestrator | Tuesday 03 February 2026 05:42:24 +0000 (0:00:03.498) 0:04:05.087 ****** 2026-02-03 05:42:53.604451 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:42:53.604471 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:42:53.604486 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:42:53.604498 | orchestrator | 2026-02-03 05:42:53.604511 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-02-03 05:42:53.604523 | orchestrator 
| Tuesday 03 February 2026 05:42:27 +0000 (0:00:03.638) 0:04:08.726 ****** 2026-02-03 05:42:53.604535 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:42:53.604547 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:42:53.604558 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:42:53.604570 | orchestrator | 2026-02-03 05:42:53.604582 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-02-03 05:42:53.604594 | orchestrator | Tuesday 03 February 2026 05:42:31 +0000 (0:00:03.549) 0:04:12.276 ****** 2026-02-03 05:42:53.604606 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:42:53.604619 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:42:53.604632 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:42:53.604646 | orchestrator | 2026-02-03 05:42:53.604659 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-02-03 05:42:53.604673 | orchestrator | Tuesday 03 February 2026 05:42:35 +0000 (0:00:03.617) 0:04:15.893 ****** 2026-02-03 05:42:53.604682 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:42:53.604690 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:42:53.604698 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:42:53.604706 | orchestrator | 2026-02-03 05:42:53.604714 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-02-03 05:42:53.604722 | orchestrator | Tuesday 03 February 2026 05:42:42 +0000 (0:00:07.015) 0:04:22.909 ****** 2026-02-03 05:42:53.604730 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:42:53.604738 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:42:53.604746 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:42:53.604754 | orchestrator | 2026-02-03 05:42:53.604762 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-02-03 05:42:53.604769 | orchestrator | Tuesday 03 February 2026 
05:42:45 +0000 (0:00:03.855) 0:04:26.764 ****** 2026-02-03 05:42:53.604779 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:42:53.604793 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:42:53.604806 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:42:53.604819 | orchestrator | 2026-02-03 05:42:53.604831 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-03 05:42:53.604875 | orchestrator | Tuesday 03 February 2026 05:42:47 +0000 (0:00:01.728) 0:04:28.492 ****** 2026-02-03 05:42:53.604890 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:42:53.604903 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:42:53.604916 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:42:53.604930 | orchestrator | 2026-02-03 05:42:53.604945 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-03 05:42:53.604957 | orchestrator | Tuesday 03 February 2026 05:42:51 +0000 (0:00:03.821) 0:04:32.314 ****** 2026-02-03 05:42:53.604984 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:43:13.989838 | orchestrator | 2026-02-03 05:43:13.990071 | orchestrator | TASK [mariadb : Run upgrade in MariaDB container] ****************************** 2026-02-03 05:43:13.990094 | orchestrator | Tuesday 03 February 2026 05:42:53 +0000 (0:00:02.125) 0:04:34.439 ****** 2026-02-03 05:43:13.990106 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:43:13.990117 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:43:13.990128 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:43:13.990139 | orchestrator | 2026-02-03 05:43:13.990181 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 05:43:13.990194 | orchestrator | testbed-node-0 : ok=34  changed=8  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-03 05:43:13.990206 | orchestrator | testbed-node-1 
: ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-03 05:43:13.990218 | orchestrator | testbed-node-2 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-03 05:43:13.990228 | orchestrator | 2026-02-03 05:43:13.990239 | orchestrator | 2026-02-03 05:43:13.990250 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 05:43:13.990261 | orchestrator | Tuesday 03 February 2026 05:43:13 +0000 (0:00:19.805) 0:04:54.244 ****** 2026-02-03 05:43:13.990272 | orchestrator | =============================================================================== 2026-02-03 05:43:13.990283 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 84.43s 2026-02-03 05:43:13.990308 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 19.81s 2026-02-03 05:43:13.990319 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 17.69s 2026-02-03 05:43:13.990331 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ----------------------- 10.81s 2026-02-03 05:43:13.990342 | orchestrator | service-check : mariadb | Get container facts --------------------------- 7.02s 2026-02-03 05:43:13.990353 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 5.31s 2026-02-03 05:43:13.990363 | orchestrator | mariadb : Copying over config.json files for services ------------------- 5.19s 2026-02-03 05:43:13.990374 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 4.78s 2026-02-03 05:43:13.990387 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 4.72s 2026-02-03 05:43:13.990400 | orchestrator | service-check-containers : Include tasks -------------------------------- 4.47s 2026-02-03 05:43:13.990412 | orchestrator | mariadb : Ensuring config directories exist 
----------------------------- 4.26s 2026-02-03 05:43:13.990425 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 4.07s 2026-02-03 05:43:13.990438 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.98s 2026-02-03 05:43:13.990449 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.91s 2026-02-03 05:43:13.990461 | orchestrator | mariadb : Check MariaDB service WSREP sync status ----------------------- 3.89s 2026-02-03 05:43:13.990471 | orchestrator | service-check : mariadb | Fail if containers are missing or not running --- 3.86s 2026-02-03 05:43:13.990482 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.82s 2026-02-03 05:43:13.990493 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 3.64s 2026-02-03 05:43:13.990503 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 3.64s 2026-02-03 05:43:13.990514 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 3.62s 2026-02-03 05:43:14.441341 | orchestrator | + osism apply -a upgrade rabbitmq 2026-02-03 05:43:16.917208 | orchestrator | 2026-02-03 05:43:16 | INFO  | Task e1311fb8-25e6-4e5f-9372-398ceb9377a5 (rabbitmq) was prepared for execution. 2026-02-03 05:43:16.917319 | orchestrator | 2026-02-03 05:43:16 | INFO  | It takes a moment until task e1311fb8-25e6-4e5f-9372-398ceb9377a5 (rabbitmq) has been started and output is visible here. 
2026-02-03 05:44:04.161264 | orchestrator | 2026-02-03 05:44:04.161382 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-03 05:44:04.161398 | orchestrator | 2026-02-03 05:44:04.161410 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-03 05:44:04.161420 | orchestrator | Tuesday 03 February 2026 05:43:23 +0000 (0:00:01.802) 0:00:01.802 ****** 2026-02-03 05:44:04.161465 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:44:04.161486 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:44:04.161503 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:44:04.161520 | orchestrator | 2026-02-03 05:44:04.161538 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-03 05:44:04.161555 | orchestrator | Tuesday 03 February 2026 05:43:25 +0000 (0:00:01.880) 0:00:03.683 ****** 2026-02-03 05:44:04.161571 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-02-03 05:44:04.161589 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-02-03 05:44:04.161608 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-02-03 05:44:04.161626 | orchestrator | 2026-02-03 05:44:04.161644 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-02-03 05:44:04.161662 | orchestrator | 2026-02-03 05:44:04.161678 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-03 05:44:04.161688 | orchestrator | Tuesday 03 February 2026 05:43:27 +0000 (0:00:02.069) 0:00:05.753 ****** 2026-02-03 05:44:04.161698 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:44:04.161709 | orchestrator | 2026-02-03 05:44:04.161719 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 
2026-02-03 05:44:04.161729 | orchestrator | Tuesday 03 February 2026 05:43:30 +0000 (0:00:02.959) 0:00:08.712 ****** 2026-02-03 05:44:04.161738 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:44:04.161748 | orchestrator | 2026-02-03 05:44:04.161758 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-02-03 05:44:04.161768 | orchestrator | Tuesday 03 February 2026 05:43:33 +0000 (0:00:02.685) 0:00:11.398 ****** 2026-02-03 05:44:04.161778 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:44:04.161788 | orchestrator | 2026-02-03 05:44:04.161800 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-02-03 05:44:04.161812 | orchestrator | Tuesday 03 February 2026 05:43:36 +0000 (0:00:03.606) 0:00:15.005 ****** 2026-02-03 05:44:04.161824 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:44:04.161836 | orchestrator | 2026-02-03 05:44:04.161848 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-02-03 05:44:04.161859 | orchestrator | Tuesday 03 February 2026 05:43:46 +0000 (0:00:10.255) 0:00:25.260 ****** 2026-02-03 05:44:04.161871 | orchestrator | ok: [testbed-node-0] => { 2026-02-03 05:44:04.161883 | orchestrator |  "changed": false, 2026-02-03 05:44:04.161940 | orchestrator |  "msg": "All assertions passed" 2026-02-03 05:44:04.161952 | orchestrator | } 2026-02-03 05:44:04.161965 | orchestrator | 2026-02-03 05:44:04.161976 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-02-03 05:44:04.161988 | orchestrator | Tuesday 03 February 2026 05:43:48 +0000 (0:00:01.383) 0:00:26.644 ****** 2026-02-03 05:44:04.161999 | orchestrator | ok: [testbed-node-0] => { 2026-02-03 05:44:04.162011 | orchestrator |  "changed": false, 2026-02-03 05:44:04.162069 | orchestrator |  "msg": "All assertions passed" 2026-02-03 05:44:04.162082 | orchestrator | } 2026-02-03 05:44:04.162094 | 
orchestrator | 2026-02-03 05:44:04.162117 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-03 05:44:04.162130 | orchestrator | Tuesday 03 February 2026 05:43:50 +0000 (0:00:01.712) 0:00:28.356 ****** 2026-02-03 05:44:04.162142 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:44:04.162153 | orchestrator | 2026-02-03 05:44:04.162163 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-03 05:44:04.162173 | orchestrator | Tuesday 03 February 2026 05:43:51 +0000 (0:00:01.920) 0:00:30.277 ****** 2026-02-03 05:44:04.162183 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:44:04.162192 | orchestrator | 2026-02-03 05:44:04.162202 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-02-03 05:44:04.162212 | orchestrator | Tuesday 03 February 2026 05:43:54 +0000 (0:00:02.289) 0:00:32.567 ****** 2026-02-03 05:44:04.162232 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:44:04.162242 | orchestrator | 2026-02-03 05:44:04.162252 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-02-03 05:44:04.162261 | orchestrator | Tuesday 03 February 2026 05:43:57 +0000 (0:00:03.337) 0:00:35.904 ****** 2026-02-03 05:44:04.162271 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:44:04.162281 | orchestrator | 2026-02-03 05:44:04.162290 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-02-03 05:44:04.162301 | orchestrator | Tuesday 03 February 2026 05:43:59 +0000 (0:00:01.955) 0:00:37.860 ****** 2026-02-03 05:44:04.162338 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-03 05:44:04.162360 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 
'host_group': 'rabbitmq'}}}}) 2026-02-03 05:44:04.162388 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-03 05:44:04.162407 | orchestrator | 2026-02-03 05:44:04.162424 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-02-03 05:44:04.162453 | orchestrator | Tuesday 03 February 2026 05:44:01 +0000 (0:00:02.039) 0:00:39.899 ****** 2026-02-03 05:44:04.162472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-03 05:44:04.162497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-03 05:44:24.684672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-03 05:44:24.684790 | orchestrator | 2026-02-03 05:44:24.684807 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-02-03 05:44:24.684820 | orchestrator | Tuesday 03 February 2026 05:44:04 +0000 (0:00:02.568) 0:00:42.468 ****** 2026-02-03 05:44:24.684832 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-03 05:44:24.684844 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-03 05:44:24.684855 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-03 05:44:24.684866 | orchestrator | 2026-02-03 05:44:24.684962 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-02-03 05:44:24.685005 | orchestrator | Tuesday 03 February 2026 05:44:06 +0000 (0:00:02.519) 0:00:44.987 ****** 2026-02-03 05:44:24.685024 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-03 05:44:24.685035 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-03 05:44:24.685046 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-03 05:44:24.685057 | orchestrator | 2026-02-03 05:44:24.685068 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-02-03 05:44:24.685079 | orchestrator | Tuesday 03 February 2026 05:44:09 +0000 (0:00:03.155) 0:00:48.143 ****** 2026-02-03 05:44:24.685090 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-03 05:44:24.685100 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-03 05:44:24.685111 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-03 05:44:24.685122 | orchestrator | 2026-02-03 05:44:24.685133 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-02-03 05:44:24.685144 | orchestrator | Tuesday 03 February 2026 05:44:12 +0000 (0:00:02.674) 0:00:50.817 ****** 2026-02-03 05:44:24.685154 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-03 05:44:24.685165 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-03 05:44:24.685176 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-03 05:44:24.685187 | orchestrator | 2026-02-03 05:44:24.685200 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-02-03 05:44:24.685212 | orchestrator | Tuesday 03 February 2026 05:44:15 +0000 (0:00:02.599) 0:00:53.417 ****** 2026-02-03 05:44:24.685225 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-03 05:44:24.685237 | orchestrator | ok: 
[testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-03 05:44:24.685250 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-03 05:44:24.685263 | orchestrator | 2026-02-03 05:44:24.685275 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-02-03 05:44:24.685287 | orchestrator | Tuesday 03 February 2026 05:44:17 +0000 (0:00:02.371) 0:00:55.788 ****** 2026-02-03 05:44:24.685301 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-03 05:44:24.685313 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-03 05:44:24.685326 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-03 05:44:24.685338 | orchestrator | 2026-02-03 05:44:24.685350 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-03 05:44:24.685363 | orchestrator | Tuesday 03 February 2026 05:44:20 +0000 (0:00:02.800) 0:00:58.589 ****** 2026-02-03 05:44:24.685376 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:44:24.685389 | orchestrator | 2026-02-03 05:44:24.685421 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-02-03 05:44:24.685433 | orchestrator | Tuesday 03 February 2026 05:44:22 +0000 (0:00:01.839) 0:01:00.428 ****** 2026-02-03 05:44:24.685446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-03 05:44:24.685476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-03 05:44:24.685490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-03 05:44:24.685502 | orchestrator | 2026-02-03 05:44:24.685514 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-02-03 05:44:24.685525 | orchestrator | Tuesday 03 February 2026 05:44:24 +0000 (0:00:02.420) 0:01:02.848 ****** 2026-02-03 05:44:24.685545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-03 05:44:34.634431 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:44:34.634630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-03 05:44:34.634677 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:44:34.634691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-03 05:44:34.634704 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:44:34.634715 | orchestrator | 2026-02-03 05:44:34.634727 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-02-03 05:44:34.634740 | orchestrator | Tuesday 03 February 2026 05:44:26 +0000 (0:00:01.612) 0:01:04.461 ****** 2026-02-03 05:44:34.634751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-03 05:44:34.634789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-03 05:44:34.634829 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:44:34.634841 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:44:34.634858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-03 05:44:34.634870 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:44:34.634881 | orchestrator | 2026-02-03 05:44:34.634892 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-03 05:44:34.634903 | orchestrator | Tuesday 03 February 2026 05:44:28 +0000 (0:00:01.918) 0:01:06.380 ****** 2026-02-03 05:44:34.634941 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:44:34.634955 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:44:34.634968 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:44:34.634981 | orchestrator | 2026-02-03 05:44:34.634994 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-02-03 05:44:34.635007 | orchestrator | Tuesday 03 February 2026 05:44:32 +0000 (0:00:04.233) 0:01:10.613 ****** 2026-02-03 05:44:34.635018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-03 05:44:34.635047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-03 05:46:26.065512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-03 05:46:26.065632 | orchestrator | 2026-02-03 05:46:26.065651 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-02-03 05:46:26.065665 | orchestrator | Tuesday 03 February 2026 05:44:34 +0000 (0:00:02.332) 0:01:12.946 ****** 2026-02-03 05:46:26.065677 | orchestrator | changed: [testbed-node-0] => { 2026-02-03 05:46:26.065690 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:46:26.065701 | orchestrator | } 2026-02-03 05:46:26.065712 | orchestrator | changed: [testbed-node-1] => { 2026-02-03 05:46:26.065723 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:46:26.065734 | orchestrator | } 2026-02-03 05:46:26.065745 | orchestrator | changed: [testbed-node-2] => { 2026-02-03 05:46:26.065756 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:46:26.065767 | orchestrator | } 2026-02-03 05:46:26.065778 | orchestrator | 2026-02-03 05:46:26.065789 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-03 05:46:26.065801 | orchestrator | Tuesday 03 February 2026 05:44:36 +0000 (0:00:01.587) 0:01:14.533 ****** 2026-02-03 05:46:26.065824 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-03 05:46:26.065888 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:46:26.065914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-03 05:46:26.065934 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:46:26.066150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-03 05:46:26.066183 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:46:26.066198 | orchestrator | 2026-02-03 05:46:26.066216 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-03 05:46:26.066230 | orchestrator | Tuesday 03 February 2026 05:44:38 +0000 (0:00:02.363) 0:01:16.897 ****** 2026-02-03 05:46:26.066243 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:46:26.066256 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:46:26.066269 | orchestrator | 
changed: [testbed-node-2] 2026-02-03 05:46:26.066281 | orchestrator | 2026-02-03 05:46:26.066295 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-03 05:46:26.066308 | orchestrator | 2026-02-03 05:46:26.066321 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-03 05:46:26.066334 | orchestrator | Tuesday 03 February 2026 05:44:40 +0000 (0:00:02.212) 0:01:19.110 ****** 2026-02-03 05:46:26.066347 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:46:26.066361 | orchestrator | 2026-02-03 05:46:26.066374 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-03 05:46:26.066387 | orchestrator | Tuesday 03 February 2026 05:44:42 +0000 (0:00:02.182) 0:01:21.292 ****** 2026-02-03 05:46:26.066398 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:46:26.066409 | orchestrator | 2026-02-03 05:46:26.066420 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-03 05:46:26.066431 | orchestrator | Tuesday 03 February 2026 05:44:52 +0000 (0:00:09.945) 0:01:31.238 ****** 2026-02-03 05:46:26.066455 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:46:26.066466 | orchestrator | 2026-02-03 05:46:26.066477 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-03 05:46:26.066488 | orchestrator | Tuesday 03 February 2026 05:45:02 +0000 (0:00:09.493) 0:01:40.731 ****** 2026-02-03 05:46:26.066499 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:46:26.066509 | orchestrator | 2026-02-03 05:46:26.066520 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-03 05:46:26.066531 | orchestrator | 2026-02-03 05:46:26.066542 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-03 05:46:26.066553 | orchestrator | 
Tuesday 03 February 2026 05:45:12 +0000 (0:00:10.023) 0:01:50.754 ****** 2026-02-03 05:46:26.066565 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:46:26.066576 | orchestrator | 2026-02-03 05:46:26.066587 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-03 05:46:26.066597 | orchestrator | Tuesday 03 February 2026 05:45:14 +0000 (0:00:01.746) 0:01:52.501 ****** 2026-02-03 05:46:26.066608 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:46:26.066619 | orchestrator | 2026-02-03 05:46:26.066630 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-03 05:46:26.066641 | orchestrator | Tuesday 03 February 2026 05:45:24 +0000 (0:00:10.153) 0:02:02.654 ****** 2026-02-03 05:46:26.066652 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:46:26.066663 | orchestrator | 2026-02-03 05:46:26.066674 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-03 05:46:26.066685 | orchestrator | Tuesday 03 February 2026 05:45:38 +0000 (0:00:14.339) 0:02:16.994 ****** 2026-02-03 05:46:26.066696 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:46:26.066707 | orchestrator | 2026-02-03 05:46:26.066717 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-03 05:46:26.066728 | orchestrator | 2026-02-03 05:46:26.066739 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-03 05:46:26.066750 | orchestrator | Tuesday 03 February 2026 05:45:48 +0000 (0:00:10.313) 0:02:27.307 ****** 2026-02-03 05:46:26.066761 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:46:26.066772 | orchestrator | 2026-02-03 05:46:26.066783 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-03 05:46:26.066794 | orchestrator | Tuesday 03 February 2026 05:45:50 +0000 (0:00:01.926) 
0:02:29.234 ****** 2026-02-03 05:46:26.066805 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:46:26.066815 | orchestrator | 2026-02-03 05:46:26.066826 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-03 05:46:26.066837 | orchestrator | Tuesday 03 February 2026 05:46:00 +0000 (0:00:10.051) 0:02:39.286 ****** 2026-02-03 05:46:26.066848 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:46:26.066859 | orchestrator | 2026-02-03 05:46:26.066870 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-03 05:46:26.066881 | orchestrator | Tuesday 03 February 2026 05:46:15 +0000 (0:00:14.065) 0:02:53.351 ****** 2026-02-03 05:46:26.066891 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:46:26.066902 | orchestrator | 2026-02-03 05:46:26.066914 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-02-03 05:46:26.066925 | orchestrator | 2026-02-03 05:46:26.066936 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-02-03 05:46:26.066959 | orchestrator | Tuesday 03 February 2026 05:46:26 +0000 (0:00:11.016) 0:03:04.368 ****** 2026-02-03 05:46:32.612151 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:46:32.612265 | orchestrator | 2026-02-03 05:46:32.612281 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-03 05:46:32.612292 | orchestrator | Tuesday 03 February 2026 05:46:27 +0000 (0:00:01.558) 0:03:05.926 ****** 2026-02-03 05:46:32.612304 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:46:32.612316 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:46:32.612356 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:46:32.612367 | orchestrator | 2026-02-03 05:46:32.612379 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-03 05:46:32.612391 | orchestrator | testbed-node-0 : ok=31  changed=11  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-03 05:46:32.612403 | orchestrator | testbed-node-1 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-03 05:46:32.612414 | orchestrator | testbed-node-2 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-03 05:46:32.612425 | orchestrator | 2026-02-03 05:46:32.612436 | orchestrator | 2026-02-03 05:46:32.612462 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 05:46:32.612474 | orchestrator | Tuesday 03 February 2026 05:46:32 +0000 (0:00:04.481) 0:03:10.408 ****** 2026-02-03 05:46:32.612485 | orchestrator | =============================================================================== 2026-02-03 05:46:32.612496 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 37.90s 2026-02-03 05:46:32.612507 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 31.35s 2026-02-03 05:46:32.612517 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode --------------------- 30.15s 2026-02-03 05:46:32.612528 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------ 10.26s 2026-02-03 05:46:32.612539 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 5.86s 2026-02-03 05:46:32.612550 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 4.48s 2026-02-03 05:46:32.612561 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 4.23s 2026-02-03 05:46:32.612572 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 3.61s 2026-02-03 05:46:32.612582 | orchestrator | rabbitmq : List RabbitMQ policies 
--------------------------------------- 3.34s 2026-02-03 05:46:32.612593 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.16s 2026-02-03 05:46:32.612604 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.96s 2026-02-03 05:46:32.612615 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.80s 2026-02-03 05:46:32.612625 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.69s 2026-02-03 05:46:32.612636 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.67s 2026-02-03 05:46:32.612647 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.60s 2026-02-03 05:46:32.612658 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.57s 2026-02-03 05:46:32.612668 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.52s 2026-02-03 05:46:32.612679 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 2.42s 2026-02-03 05:46:32.612692 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.37s 2026-02-03 05:46:32.612705 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.36s 2026-02-03 05:46:32.996144 | orchestrator | + osism apply -a upgrade openvswitch 2026-02-03 05:46:35.289513 | orchestrator | 2026-02-03 05:46:35 | INFO  | Task 7a1458dd-9f27-4f04-964e-6bc69c32af91 (openvswitch) was prepared for execution. 2026-02-03 05:46:35.289639 | orchestrator | 2026-02-03 05:46:35 | INFO  | It takes a moment until task 7a1458dd-9f27-4f04-964e-6bc69c32af91 (openvswitch) has been started and output is visible here. 
2026-02-03 05:47:05.204875 | orchestrator |
2026-02-03 05:47:05.205062 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-03 05:47:05.205085 | orchestrator |
2026-02-03 05:47:05.205097 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-03 05:47:05.205136 | orchestrator | Tuesday 03 February 2026 05:46:42 +0000 (0:00:02.260) 0:00:02.260 ******
2026-02-03 05:47:05.205149 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:47:05.205161 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:47:05.205172 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:47:05.205183 | orchestrator | ok: [testbed-node-3]
2026-02-03 05:47:05.205195 | orchestrator | ok: [testbed-node-4]
2026-02-03 05:47:05.205206 | orchestrator | ok: [testbed-node-5]
2026-02-03 05:47:05.205216 | orchestrator |
2026-02-03 05:47:05.205228 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-03 05:47:05.205239 | orchestrator | Tuesday 03 February 2026 05:46:45 +0000 (0:00:03.111) 0:00:05.371 ******
2026-02-03 05:47:05.205250 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-03 05:47:05.205262 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-03 05:47:05.205273 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-03 05:47:05.205283 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-03 05:47:05.205294 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-03 05:47:05.205305 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-03 05:47:05.205316 | orchestrator |
2026-02-03 05:47:05.205327 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-02-03 05:47:05.205338 | orchestrator |
2026-02-03 05:47:05.205349 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-02-03 05:47:05.205360 | orchestrator | Tuesday 03 February 2026 05:46:47 +0000 (0:00:02.080) 0:00:07.452 ******
2026-02-03 05:47:05.205372 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 05:47:05.205387 | orchestrator |
2026-02-03 05:47:05.205401 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-03 05:47:05.205414 | orchestrator | Tuesday 03 February 2026 05:46:50 +0000 (0:00:03.017) 0:00:10.469 ******
2026-02-03 05:47:05.205427 | orchestrator | ok: [testbed-node-0] => (item=openvswitch)
2026-02-03 05:47:05.205441 | orchestrator | ok: [testbed-node-1] => (item=openvswitch)
2026-02-03 05:47:05.205453 | orchestrator | ok: [testbed-node-2] => (item=openvswitch)
2026-02-03 05:47:05.205467 | orchestrator | ok: [testbed-node-3] => (item=openvswitch)
2026-02-03 05:47:05.205494 | orchestrator | ok: [testbed-node-4] => (item=openvswitch)
2026-02-03 05:47:05.205507 | orchestrator | ok: [testbed-node-5] => (item=openvswitch)
2026-02-03 05:47:05.205519 | orchestrator |
2026-02-03 05:47:05.205535 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-03 05:47:05.205553 | orchestrator | Tuesday 03 February 2026 05:46:53 +0000 (0:00:03.293) 0:00:13.243 ******
2026-02-03 05:47:05.205573 | orchestrator | ok: [testbed-node-0] => (item=openvswitch)
2026-02-03 05:47:05.205592 | orchestrator | ok: [testbed-node-3] => (item=openvswitch)
2026-02-03 05:47:05.205609 | orchestrator | ok: [testbed-node-1] => (item=openvswitch)
2026-02-03 05:47:05.205623 | orchestrator | ok: [testbed-node-2] => (item=openvswitch)
2026-02-03 05:47:05.205637 | orchestrator | ok: [testbed-node-4] => (item=openvswitch)
2026-02-03 05:47:05.205649 | orchestrator | ok: [testbed-node-5] => (item=openvswitch)
2026-02-03 05:47:05.205663 | orchestrator |
2026-02-03 05:47:05.205682 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-03 05:47:05.205698 | orchestrator | Tuesday 03 February 2026 05:46:56 +0000 (0:00:02.876) 0:00:16.537 ******
2026-02-03 05:47:05.205716 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-02-03 05:47:05.205772 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:47:05.205790 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-02-03 05:47:05.205804 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:47:05.205841 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-02-03 05:47:05.205860 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:47:05.205879 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-02-03 05:47:05.205899 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:47:05.205918 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-02-03 05:47:05.205936 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:47:05.205947 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-02-03 05:47:05.205961 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:47:05.205979 | orchestrator |
2026-02-03 05:47:05.206088 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-02-03 05:47:05.206101 | orchestrator | Tuesday 03 February 2026 05:46:59 +0000 (0:00:02.645) 0:00:19.414 ******
2026-02-03 05:47:05.206112 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:47:05.206123 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:47:05.206133 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:47:05.206144 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:47:05.206165 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:47:05.206176 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:47:05.206187 | orchestrator |
2026-02-03 05:47:05.206197 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-02-03 05:47:05.206208 | orchestrator | Tuesday 03 February 2026 05:47:02 +0000 (0:00:02.645) 0:00:22.060 ******
2026-02-03 05:47:05.206247 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-03 05:47:05.206267 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-03 05:47:05.206279 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-03 05:47:05.206299 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-03 05:47:05.206325 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-03 05:47:05.206337 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-03 05:47:05.206357 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-03 05:47:10.082933 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-03 05:47:10.083134 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-03 05:47:10.083201 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-03 05:47:10.083224 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-03 05:47:10.083246 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-03 05:47:10.083266 | orchestrator |
2026-02-03 05:47:10.083288 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-02-03 05:47:10.083309 | orchestrator | Tuesday 03 February 2026 05:47:05 +0000 (0:00:03.034) 0:00:25.094 ******
2026-02-03 05:47:10.083357 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-03 05:47:10.083381 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-03 05:47:10.083421 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-03 05:47:10.083442 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-03 05:47:10.083462 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-03 05:47:10.083481 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-03 05:47:10.083517 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-03 05:47:14.979948 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-03 05:47:14.980113 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-03 05:47:14.980133 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-03 05:47:14.980146 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-03 05:47:14.980159 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-03 05:47:14.980171 | orchestrator |
2026-02-03 05:47:14.980184 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-02-03 05:47:14.980197 | orchestrator | Tuesday 03 February 2026 05:47:10 +0000 (0:00:04.891) 0:00:29.985 ******
2026-02-03 05:47:14.980208 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:47:14.980220 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:47:14.980231 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:47:14.980242 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:47:14.980252 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:47:14.980263 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:47:14.980274 | orchestrator |
2026-02-03 05:47:14.980285 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] ***************
2026-02-03 05:47:14.980323 | orchestrator | Tuesday 03 February 2026 05:47:13 +0000 (0:00:02.971) 0:00:32.957 ******
2026-02-03 05:47:14.980342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-03 05:47:14.980356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-03 05:47:14.980368 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-03 05:47:14.980379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-03 05:47:14.980391 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-03 05:47:14.980426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-03 05:47:20.471707 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-03 05:47:20.471840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-03 05:47:20.471859 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-03 05:47:20.471913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-03 05:47:20.471928 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-03 05:47:20.471983 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-03 05:47:20.472054 | orchestrator |
2026-02-03 05:47:20.472077 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] ***
2026-02-03 05:47:20.472092 | orchestrator | Tuesday 03 February 2026 05:47:17 +0000 (0:00:04.651) 0:00:37.608 ******
2026-02-03 05:47:20.472105 | orchestrator | changed: [testbed-node-0] => {
2026-02-03 05:47:20.472117 | orchestrator |  "msg": "Notifying handlers"
2026-02-03 05:47:20.472128 | orchestrator | }
2026-02-03 05:47:20.472140 | orchestrator | changed: [testbed-node-1] => {
2026-02-03 05:47:20.472150 | orchestrator |  "msg": "Notifying handlers"
2026-02-03 05:47:20.472161 | orchestrator | }
2026-02-03 05:47:20.472172 | orchestrator | changed: [testbed-node-2] => {
2026-02-03 05:47:20.472183 | orchestrator |  "msg": "Notifying handlers"
2026-02-03 05:47:20.472194 | orchestrator | }
2026-02-03 05:47:20.472205 | orchestrator | changed: [testbed-node-3] => {
2026-02-03 05:47:20.472216 | orchestrator |  "msg": "Notifying handlers"
2026-02-03 05:47:20.472229 | orchestrator | }
2026-02-03 05:47:20.472241 | orchestrator | changed: [testbed-node-4] => {
2026-02-03 05:47:20.472254 | orchestrator |  "msg": "Notifying handlers"
2026-02-03 05:47:20.472266 | orchestrator | }
2026-02-03 05:47:20.472279 | orchestrator | changed: [testbed-node-5] => {
2026-02-03 05:47:20.472292 | orchestrator |  "msg": "Notifying handlers"
2026-02-03 05:47:20.472304 | orchestrator | }
2026-02-03 05:47:20.472317 | orchestrator |
2026-02-03 05:47:20.472330 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-03 05:47:20.472343 | orchestrator | Tuesday 03 February 2026 05:47:19 +0000 (0:00:02.167) 0:00:39.776 ******
2026-02-03 05:47:20.472356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-03 05:47:20.472370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-03 05:47:20.472392 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:47:20.472405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-03 05:47:20.472419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-03 05:47:20.472440 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:48:08.287381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-03 05:48:08.287468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-03 05:48:08.287475 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:48:08.287480 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes':
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-03 05:48:08.287485 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-03 05:48:08.287506 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:48:08.287510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-03 05:48:08.287529 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-03 05:48:08.287534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-03 05:48:08.287538 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:48:08.287542 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-03 05:48:08.287546 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:48:08.287550 | orchestrator | 2026-02-03 05:48:08.287555 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-03 05:48:08.287560 | orchestrator | Tuesday 03 February 2026 05:47:22 +0000 (0:00:02.939) 0:00:42.715 ****** 2026-02-03 05:48:08.287570 | orchestrator | 2026-02-03 05:48:08.287576 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-03 05:48:08.287583 | orchestrator | Tuesday 03 February 2026 05:47:23 +0000 (0:00:00.601) 0:00:43.317 ****** 2026-02-03 05:48:08.287589 | orchestrator | 2026-02-03 05:48:08.287595 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-03 05:48:08.287600 | orchestrator | Tuesday 03 February 2026 05:47:23 +0000 (0:00:00.539) 0:00:43.856 ****** 2026-02-03 05:48:08.287605 | orchestrator | 2026-02-03 05:48:08.287611 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-03 05:48:08.287616 | orchestrator | Tuesday 03 February 2026 05:47:24 +0000 (0:00:00.585) 0:00:44.442 ****** 2026-02-03 05:48:08.287623 | orchestrator | 2026-02-03 05:48:08.287629 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-03 05:48:08.287634 | orchestrator | Tuesday 03 February 2026 05:47:25 +0000 (0:00:00.820) 0:00:45.262 ****** 2026-02-03 05:48:08.287639 | orchestrator | 2026-02-03 05:48:08.287645 | orchestrator | TASK 
[openvswitch : Flush Handlers] ******************************************** 2026-02-03 05:48:08.287651 | orchestrator | Tuesday 03 February 2026 05:47:25 +0000 (0:00:00.563) 0:00:45.825 ****** 2026-02-03 05:48:08.287657 | orchestrator | 2026-02-03 05:48:08.287663 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-02-03 05:48:08.287669 | orchestrator | Tuesday 03 February 2026 05:47:26 +0000 (0:00:00.945) 0:00:46.771 ****** 2026-02-03 05:48:08.287675 | orchestrator | changed: [testbed-node-3] 2026-02-03 05:48:08.287681 | orchestrator | changed: [testbed-node-4] 2026-02-03 05:48:08.287687 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:48:08.287693 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:48:08.287698 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:48:08.287705 | orchestrator | changed: [testbed-node-5] 2026-02-03 05:48:08.287712 | orchestrator | 2026-02-03 05:48:08.287718 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-02-03 05:48:08.287725 | orchestrator | Tuesday 03 February 2026 05:47:46 +0000 (0:00:20.048) 0:01:06.819 ****** 2026-02-03 05:48:08.287731 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:48:08.287736 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:48:08.287740 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:48:08.287743 | orchestrator | ok: [testbed-node-3] 2026-02-03 05:48:08.287747 | orchestrator | ok: [testbed-node-4] 2026-02-03 05:48:08.287751 | orchestrator | ok: [testbed-node-5] 2026-02-03 05:48:08.287755 | orchestrator | 2026-02-03 05:48:08.287758 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-03 05:48:08.287762 | orchestrator | Tuesday 03 February 2026 05:47:49 +0000 (0:00:02.457) 0:01:09.276 ****** 2026-02-03 05:48:08.287766 | orchestrator | changed: [testbed-node-4] 2026-02-03 05:48:08.287770 | orchestrator | changed: 
[testbed-node-3] 2026-02-03 05:48:08.287773 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:48:08.287777 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:48:08.287781 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:48:08.287785 | orchestrator | changed: [testbed-node-5] 2026-02-03 05:48:08.287789 | orchestrator | 2026-02-03 05:48:08.287793 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-02-03 05:48:08.287812 | orchestrator | Tuesday 03 February 2026 05:48:08 +0000 (0:00:18.895) 0:01:28.172 ****** 2026-02-03 05:48:31.045538 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-02-03 05:48:31.045652 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-02-03 05:48:31.045685 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-02-03 05:48:31.045698 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-02-03 05:48:31.045709 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-02-03 05:48:31.045749 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-02-03 05:48:31.045761 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-02-03 05:48:31.045771 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-02-03 05:48:31.045782 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-02-03 05:48:31.045793 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 
'name': 'hostname', 'value': 'testbed-node-2'}) 2026-02-03 05:48:31.045804 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-03 05:48:31.045816 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-03 05:48:31.045826 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-03 05:48:31.045837 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-03 05:48:31.045848 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-03 05:48:31.045859 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-02-03 05:48:31.045870 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-02-03 05:48:31.045881 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-03 05:48:31.045892 | orchestrator | 2026-02-03 05:48:31.045904 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-02-03 05:48:31.045916 | orchestrator | Tuesday 03 February 2026 05:48:21 +0000 (0:00:13.545) 0:01:41.718 ****** 2026-02-03 05:48:31.045928 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-02-03 05:48:31.045940 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:48:31.045952 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-02-03 05:48:31.045963 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:48:31.045974 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-02-03 05:48:31.045985 | 
orchestrator | skipping: [testbed-node-5] 2026-02-03 05:48:31.045996 | orchestrator | ok: [testbed-node-0] => (item=br-ex) 2026-02-03 05:48:31.046007 | orchestrator | ok: [testbed-node-2] => (item=br-ex) 2026-02-03 05:48:31.046120 | orchestrator | ok: [testbed-node-1] => (item=br-ex) 2026-02-03 05:48:31.046136 | orchestrator | 2026-02-03 05:48:31.046150 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-02-03 05:48:31.046163 | orchestrator | Tuesday 03 February 2026 05:48:25 +0000 (0:00:03.589) 0:01:45.308 ****** 2026-02-03 05:48:31.046177 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-02-03 05:48:31.046189 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:48:31.046203 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-02-03 05:48:31.046216 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:48:31.046229 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-02-03 05:48:31.046242 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:48:31.046255 | orchestrator | ok: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-02-03 05:48:31.046268 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-02-03 05:48:31.046281 | orchestrator | ok: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-02-03 05:48:31.046301 | orchestrator | 2026-02-03 05:48:31.046322 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 05:48:31.046358 | orchestrator | testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-03 05:48:31.046380 | orchestrator | testbed-node-1 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-03 05:48:31.046400 | orchestrator | testbed-node-2 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-03 05:48:31.046422 | orchestrator | 
testbed-node-3 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-03 05:48:31.046465 | orchestrator | testbed-node-4 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-03 05:48:31.046494 | orchestrator | testbed-node-5 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-03 05:48:31.046514 | orchestrator | 2026-02-03 05:48:31.046535 | orchestrator | 2026-02-03 05:48:31.046557 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 05:48:31.046576 | orchestrator | Tuesday 03 February 2026 05:48:30 +0000 (0:00:05.137) 0:01:50.445 ****** 2026-02-03 05:48:31.046593 | orchestrator | =============================================================================== 2026-02-03 05:48:31.046604 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 20.05s 2026-02-03 05:48:31.046615 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.90s 2026-02-03 05:48:31.046626 | orchestrator | openvswitch : Set system-id, hostname and hw-offload ------------------- 13.55s 2026-02-03 05:48:31.046637 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 5.14s 2026-02-03 05:48:31.046647 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.89s 2026-02-03 05:48:31.046658 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 4.65s 2026-02-03 05:48:31.046669 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 4.06s 2026-02-03 05:48:31.046680 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.59s 2026-02-03 05:48:31.046691 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 3.29s 2026-02-03 05:48:31.046701 | orchestrator | Group hosts based on Kolla 
action --------------------------------------- 3.11s 2026-02-03 05:48:31.046712 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 3.03s 2026-02-03 05:48:31.046723 | orchestrator | openvswitch : include_tasks --------------------------------------------- 3.02s 2026-02-03 05:48:31.046734 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.97s 2026-02-03 05:48:31.046744 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.94s 2026-02-03 05:48:31.046755 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.88s 2026-02-03 05:48:31.046766 | orchestrator | module-load : Load modules ---------------------------------------------- 2.77s 2026-02-03 05:48:31.046777 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 2.65s 2026-02-03 05:48:31.046788 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.46s 2026-02-03 05:48:31.046799 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 2.17s 2026-02-03 05:48:31.046809 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.08s 2026-02-03 05:48:31.473671 | orchestrator | + osism apply -a upgrade ovn 2026-02-03 05:48:33.716243 | orchestrator | 2026-02-03 05:48:33 | INFO  | Task 4932c061-c889-4743-8c34-091b011349ef (ovn) was prepared for execution. 2026-02-03 05:48:33.716342 | orchestrator | 2026-02-03 05:48:33 | INFO  | It takes a moment until task 4932c061-c889-4743-8c34-091b011349ef (ovn) has been started and output is visible here. 
2026-02-03 05:48:59.250718 | orchestrator | 2026-02-03 05:48:59.250836 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-03 05:48:59.250856 | orchestrator | 2026-02-03 05:48:59.250869 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-03 05:48:59.250881 | orchestrator | Tuesday 03 February 2026 05:48:39 +0000 (0:00:01.496) 0:00:01.496 ****** 2026-02-03 05:48:59.250892 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:48:59.250904 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:48:59.250915 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:48:59.250926 | orchestrator | ok: [testbed-node-3] 2026-02-03 05:48:59.250937 | orchestrator | ok: [testbed-node-4] 2026-02-03 05:48:59.250948 | orchestrator | ok: [testbed-node-5] 2026-02-03 05:48:59.250959 | orchestrator | 2026-02-03 05:48:59.250970 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-03 05:48:59.250981 | orchestrator | Tuesday 03 February 2026 05:48:44 +0000 (0:00:04.242) 0:00:05.739 ****** 2026-02-03 05:48:59.250992 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-02-03 05:48:59.251004 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-02-03 05:48:59.251015 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-02-03 05:48:59.251026 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-02-03 05:48:59.251081 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-02-03 05:48:59.251095 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-02-03 05:48:59.251106 | orchestrator | 2026-02-03 05:48:59.251117 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-02-03 05:48:59.251128 | orchestrator | 2026-02-03 05:48:59.251139 | orchestrator | TASK [ovn-controller : include_tasks] 
****************************************** 2026-02-03 05:48:59.251150 | orchestrator | Tuesday 03 February 2026 05:48:48 +0000 (0:00:04.146) 0:00:09.885 ****** 2026-02-03 05:48:59.251162 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-03 05:48:59.251174 | orchestrator | 2026-02-03 05:48:59.251185 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-02-03 05:48:59.251196 | orchestrator | Tuesday 03 February 2026 05:48:50 +0000 (0:00:02.677) 0:00:12.562 ****** 2026-02-03 05:48:59.251226 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:48:59.251241 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:48:59.251256 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:48:59.251270 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:48:59.251307 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:48:59.251340 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:48:59.251357 | orchestrator | 2026-02-03 05:48:59.251378 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-02-03 05:48:59.251398 | orchestrator | Tuesday 03 February 2026 05:48:53 +0000 (0:00:02.767) 0:00:15.330 ****** 2026-02-03 05:48:59.251419 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:48:59.251439 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:48:59.251460 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:48:59.251489 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:48:59.251509 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 
'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:48:59.251531 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:48:59.251566 | orchestrator | 2026-02-03 05:48:59.251582 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-02-03 05:48:59.251593 | orchestrator | Tuesday 03 February 2026 05:48:56 +0000 (0:00:03.079) 0:00:18.409 ****** 2026-02-03 05:48:59.251604 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:48:59.251615 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:48:59.251672 | orchestrator | ok: [testbed-node-2] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:49:07.248831 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:49:07.248944 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:49:07.248960 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:49:07.248973 | orchestrator | 2026-02-03 05:49:07.248987 | orchestrator | TASK [ovn-controller : Copying over systemd override] 
************************** 2026-02-03 05:49:07.249017 | orchestrator | Tuesday 03 February 2026 05:48:59 +0000 (0:00:02.473) 0:00:20.883 ****** 2026-02-03 05:49:07.249029 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:49:07.249107 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:49:07.249148 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:49:07.249160 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:49:07.249171 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:49:07.249201 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:49:07.249213 | orchestrator | 2026-02-03 05:49:07.249224 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-02-03 05:49:07.249236 | orchestrator | Tuesday 03 February 2026 05:49:02 +0000 (0:00:03.064) 0:00:23.948 ****** 2026-02-03 05:49:07.249248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:49:07.249261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:49:07.249278 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:49:07.249290 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:49:07.249313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:49:07.249324 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:49:07.249335 | orchestrator | 2026-02-03 05:49:07.249346 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-02-03 05:49:07.249360 | orchestrator | Tuesday 03 February 2026 05:49:04 +0000 (0:00:02.627) 0:00:26.575 ****** 2026-02-03 05:49:07.249374 | orchestrator | changed: [testbed-node-0] => { 2026-02-03 05:49:07.249387 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:49:07.249400 | orchestrator | } 2026-02-03 05:49:07.249413 | orchestrator | changed: [testbed-node-1] => { 2026-02-03 05:49:07.249425 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:49:07.249438 | orchestrator | } 2026-02-03 05:49:07.249451 | orchestrator | changed: [testbed-node-2] => { 2026-02-03 05:49:07.249463 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:49:07.249476 | orchestrator | } 2026-02-03 05:49:07.249489 | orchestrator | changed: [testbed-node-3] => { 2026-02-03 05:49:07.249501 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:49:07.249514 | orchestrator | } 2026-02-03 05:49:07.249527 | orchestrator | changed: [testbed-node-4] => { 2026-02-03 05:49:07.249540 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:49:07.249553 | orchestrator | } 2026-02-03 05:49:07.249565 | orchestrator | changed: [testbed-node-5] => { 2026-02-03 05:49:07.249578 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:49:07.249591 | orchestrator | } 2026-02-03 05:49:07.249604 | orchestrator | 2026-02-03 05:49:07.249616 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-03 05:49:07.249630 | orchestrator | Tuesday 03 February 2026 05:49:07 +0000 
(0:00:02.170) 0:00:28.746 ****** 2026-02-03 05:49:07.249651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:49:39.942856 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:49:39.942986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:49:39.943005 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:49:39.943017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:49:39.943098 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:49:39.943126 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:49:39.943137 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:49:39.943147 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:49:39.943157 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:49:39.943167 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:49:39.943177 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:49:39.943187 | orchestrator | 2026-02-03 05:49:39.943198 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-02-03 05:49:39.943209 | orchestrator | Tuesday 03 February 2026 05:49:09 +0000 (0:00:02.675) 0:00:31.422 ****** 2026-02-03 05:49:39.943218 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:49:39.943229 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:49:39.943239 | orchestrator | ok: [testbed-node-3] 2026-02-03 05:49:39.943248 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:49:39.943258 | orchestrator | ok: [testbed-node-4] 
2026-02-03 05:49:39.943267 | orchestrator | ok: [testbed-node-5] 2026-02-03 05:49:39.943276 | orchestrator | 2026-02-03 05:49:39.943286 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-03 05:49:39.943296 | orchestrator | Tuesday 03 February 2026 05:49:13 +0000 (0:00:03.803) 0:00:35.225 ****** 2026-02-03 05:49:39.943306 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-02-03 05:49:39.943316 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-03 05:49:39.943328 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-02-03 05:49:39.943339 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-02-03 05:49:39.943351 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-03 05:49:39.943362 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-02-03 05:49:39.943373 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-03 05:49:39.943384 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-03 05:49:39.943395 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-03 05:49:39.943413 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-03 05:49:39.943425 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-03 05:49:39.943454 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-03 05:49:39.943465 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-03 05:49:39.943478 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-03 05:49:39.943489 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-03 05:49:39.943501 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-03 05:49:39.943512 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-03 05:49:39.943523 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-03 05:49:39.943535 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-03 05:49:39.943546 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-03 05:49:39.943557 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-03 05:49:39.943574 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-03 05:49:39.943586 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-03 05:49:39.943597 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-03 05:49:39.943608 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-03 05:49:39.943619 | orchestrator | ok: [testbed-node-0] => 
(item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-03 05:49:39.943630 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-03 05:49:39.943641 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-03 05:49:39.943652 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-03 05:49:39.943664 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-03 05:49:39.943675 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-03 05:49:39.943686 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-03 05:49:39.943696 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-03 05:49:39.943705 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-03 05:49:39.943715 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-03 05:49:39.943724 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-03 05:49:39.943734 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-03 05:49:39.943744 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-03 05:49:39.943753 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-03 05:49:39.943769 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-03 05:49:39.943778 | orchestrator | ok: [testbed-node-1] => (item={'name': 
'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-03 05:49:39.943788 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-03 05:49:39.943798 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-02-03 05:49:39.943814 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-02-03 05:49:39.943823 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-02-03 05:49:39.943833 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-03 05:49:39.943842 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-03 05:49:39.943858 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-03 05:52:32.639842 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-03 05:52:32.639958 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-03 05:52:32.639973 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-03 05:52:32.639983 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-03 05:52:32.640023 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 
'state': 'present'}) 2026-02-03 05:52:32.640032 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-03 05:52:32.640041 | orchestrator | 2026-02-03 05:52:32.640051 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-03 05:52:32.640060 | orchestrator | Tuesday 03 February 2026 05:49:36 +0000 (0:00:23.019) 0:00:58.245 ****** 2026-02-03 05:52:32.640068 | orchestrator | 2026-02-03 05:52:32.640076 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-03 05:52:32.640085 | orchestrator | Tuesday 03 February 2026 05:49:37 +0000 (0:00:00.456) 0:00:58.701 ****** 2026-02-03 05:52:32.640093 | orchestrator | 2026-02-03 05:52:32.640101 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-03 05:52:32.640163 | orchestrator | Tuesday 03 February 2026 05:49:37 +0000 (0:00:00.488) 0:00:59.190 ****** 2026-02-03 05:52:32.640173 | orchestrator | 2026-02-03 05:52:32.640181 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-03 05:52:32.640189 | orchestrator | Tuesday 03 February 2026 05:49:38 +0000 (0:00:00.543) 0:00:59.733 ****** 2026-02-03 05:52:32.640197 | orchestrator | 2026-02-03 05:52:32.640205 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-03 05:52:32.640213 | orchestrator | Tuesday 03 February 2026 05:49:38 +0000 (0:00:00.475) 0:01:00.209 ****** 2026-02-03 05:52:32.640221 | orchestrator | 2026-02-03 05:52:32.640229 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-03 05:52:32.640237 | orchestrator | Tuesday 03 February 2026 05:49:39 +0000 (0:00:00.481) 0:01:00.691 ****** 2026-02-03 05:52:32.640245 | orchestrator | 2026-02-03 05:52:32.640253 | 
orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-02-03 05:52:32.640261 | orchestrator | Tuesday 03 February 2026 05:49:39 +0000 (0:00:00.847) 0:01:01.539 ****** 2026-02-03 05:52:32.640289 | orchestrator | changed: [testbed-node-5] 2026-02-03 05:52:32.640299 | orchestrator | changed: [testbed-node-4] 2026-02-03 05:52:32.640307 | orchestrator | changed: [testbed-node-3] 2026-02-03 05:52:32.640314 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:52:32.640322 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:52:32.640330 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:52:32.640338 | orchestrator | 2026-02-03 05:52:32.640346 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-02-03 05:52:32.640354 | orchestrator | 2026-02-03 05:52:32.640361 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-03 05:52:32.640369 | orchestrator | Tuesday 03 February 2026 05:51:52 +0000 (0:02:12.336) 0:03:13.875 ****** 2026-02-03 05:52:32.640378 | orchestrator | included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:52:32.640386 | orchestrator | 2026-02-03 05:52:32.640394 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-03 05:52:32.640402 | orchestrator | Tuesday 03 February 2026 05:51:54 +0000 (0:00:02.155) 0:03:16.031 ****** 2026-02-03 05:52:32.640409 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-03 05:52:32.640417 | orchestrator | 2026-02-03 05:52:32.640425 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-02-03 05:52:32.640433 | orchestrator | Tuesday 03 February 2026 05:51:56 +0000 (0:00:02.212) 0:03:18.244 ****** 2026-02-03 05:52:32.640441 | orchestrator 
| ok: [testbed-node-0] 2026-02-03 05:52:32.640450 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:52:32.640458 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:52:32.640465 | orchestrator | 2026-02-03 05:52:32.640473 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-02-03 05:52:32.640481 | orchestrator | Tuesday 03 February 2026 05:51:58 +0000 (0:00:02.104) 0:03:20.348 ****** 2026-02-03 05:52:32.640489 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:52:32.640497 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:52:32.640504 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:52:32.640512 | orchestrator | 2026-02-03 05:52:32.640520 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-02-03 05:52:32.640528 | orchestrator | Tuesday 03 February 2026 05:52:00 +0000 (0:00:01.596) 0:03:21.944 ****** 2026-02-03 05:52:32.640536 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:52:32.640544 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:52:32.640552 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:52:32.640559 | orchestrator | 2026-02-03 05:52:32.640567 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-02-03 05:52:32.640575 | orchestrator | Tuesday 03 February 2026 05:52:01 +0000 (0:00:01.538) 0:03:23.483 ****** 2026-02-03 05:52:32.640583 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:52:32.640591 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:52:32.640598 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:52:32.640606 | orchestrator | 2026-02-03 05:52:32.640614 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-02-03 05:52:32.640622 | orchestrator | Tuesday 03 February 2026 05:52:03 +0000 (0:00:01.785) 0:03:25.269 ****** 2026-02-03 05:52:32.640629 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:52:32.640637 | orchestrator | ok: 
[testbed-node-1] 2026-02-03 05:52:32.640645 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:52:32.640653 | orchestrator | 2026-02-03 05:52:32.640676 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-02-03 05:52:32.640685 | orchestrator | Tuesday 03 February 2026 05:52:05 +0000 (0:00:01.487) 0:03:26.756 ****** 2026-02-03 05:52:32.640693 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:52:32.640701 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:52:32.640709 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:52:32.640717 | orchestrator | 2026-02-03 05:52:32.640725 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-02-03 05:52:32.640740 | orchestrator | Tuesday 03 February 2026 05:52:06 +0000 (0:00:01.488) 0:03:28.244 ****** 2026-02-03 05:52:32.640749 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:52:32.640757 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:52:32.640765 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:52:32.640772 | orchestrator | 2026-02-03 05:52:32.640780 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-02-03 05:52:32.640789 | orchestrator | Tuesday 03 February 2026 05:52:08 +0000 (0:00:01.968) 0:03:30.213 ****** 2026-02-03 05:52:32.640797 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:52:32.640805 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:52:32.640812 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:52:32.640820 | orchestrator | 2026-02-03 05:52:32.640828 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-02-03 05:52:32.640836 | orchestrator | Tuesday 03 February 2026 05:52:10 +0000 (0:00:01.949) 0:03:32.163 ****** 2026-02-03 05:52:32.640844 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:52:32.640852 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:52:32.640860 | 
orchestrator | ok: [testbed-node-2] 2026-02-03 05:52:32.640868 | orchestrator | 2026-02-03 05:52:32.640876 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-02-03 05:52:32.640884 | orchestrator | Tuesday 03 February 2026 05:52:12 +0000 (0:00:02.138) 0:03:34.302 ****** 2026-02-03 05:52:32.640896 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:52:32.640904 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:52:32.640912 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:52:32.640920 | orchestrator | 2026-02-03 05:52:32.640928 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-02-03 05:52:32.640936 | orchestrator | Tuesday 03 February 2026 05:52:14 +0000 (0:00:01.549) 0:03:35.851 ****** 2026-02-03 05:52:32.640944 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:52:32.640952 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:52:32.640960 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:52:32.640968 | orchestrator | 2026-02-03 05:52:32.640976 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-02-03 05:52:32.640984 | orchestrator | Tuesday 03 February 2026 05:52:15 +0000 (0:00:01.751) 0:03:37.602 ****** 2026-02-03 05:52:32.640992 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:52:32.641000 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:52:32.641007 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:52:32.641015 | orchestrator | 2026-02-03 05:52:32.641023 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-02-03 05:52:32.641031 | orchestrator | Tuesday 03 February 2026 05:52:17 +0000 (0:00:01.470) 0:03:39.072 ****** 2026-02-03 05:52:32.641039 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:52:32.641047 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:52:32.641055 | orchestrator | ok: [testbed-node-2] 
2026-02-03 05:52:32.641063 | orchestrator | 2026-02-03 05:52:32.641071 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-02-03 05:52:32.641079 | orchestrator | Tuesday 03 February 2026 05:52:19 +0000 (0:00:01.882) 0:03:40.955 ****** 2026-02-03 05:52:32.641087 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:52:32.641095 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:52:32.641103 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:52:32.641133 | orchestrator | 2026-02-03 05:52:32.641142 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-02-03 05:52:32.641150 | orchestrator | Tuesday 03 February 2026 05:52:20 +0000 (0:00:01.523) 0:03:42.478 ****** 2026-02-03 05:52:32.641158 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:52:32.641166 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:52:32.641173 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:52:32.641181 | orchestrator | 2026-02-03 05:52:32.641190 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-02-03 05:52:32.641202 | orchestrator | Tuesday 03 February 2026 05:52:22 +0000 (0:00:02.161) 0:03:44.640 ****** 2026-02-03 05:52:32.641215 | orchestrator | ok: [testbed-node-0] 2026-02-03 05:52:32.641235 | orchestrator | ok: [testbed-node-1] 2026-02-03 05:52:32.641249 | orchestrator | ok: [testbed-node-2] 2026-02-03 05:52:32.641261 | orchestrator | 2026-02-03 05:52:32.641272 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-02-03 05:52:32.641280 | orchestrator | Tuesday 03 February 2026 05:52:24 +0000 (0:00:01.565) 0:03:46.206 ****** 2026-02-03 05:52:32.641288 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:52:32.641296 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:52:32.641303 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:52:32.641311 | orchestrator | 2026-02-03 
05:52:32.641319 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-03 05:52:32.641327 | orchestrator | Tuesday 03 February 2026 05:52:26 +0000 (0:00:01.640) 0:03:47.846 ****** 2026-02-03 05:52:32.641335 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:52:32.641342 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:52:32.641350 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:52:32.641358 | orchestrator | 2026-02-03 05:52:32.641366 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-03 05:52:32.641374 | orchestrator | Tuesday 03 February 2026 05:52:27 +0000 (0:00:01.788) 0:03:49.635 ****** 2026-02-03 05:52:32.641392 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:52:39.599808 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:52:39.599891 | orchestrator | ok: [testbed-node-0] 
=> (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:52:39.599914 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:52:39.599920 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:52:39.599939 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:52:39.599945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:52:39.599951 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:52:39.599969 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:52:39.599975 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:52:39.599983 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:52:39.599988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:52:39.599994 | orchestrator | 2026-02-03 05:52:39.600000 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-03 05:52:39.600007 | orchestrator | Tuesday 03 February 2026 05:52:32 +0000 (0:00:04.633) 0:03:54.268 ****** 2026-02-03 05:52:39.600016 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 
'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:52:39.600022 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:52:39.600027 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:52:39.600033 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 
'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:52:39.600041 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:52:55.436544 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:52:55.436676 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:52:55.436695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': 
{'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:52:55.436733 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:52:55.436746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:52:55.436758 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:52:55.436769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:52:55.436781 | orchestrator | 2026-02-03 05:52:55.436795 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-02-03 05:52:55.436808 | orchestrator | Tuesday 03 February 2026 05:52:39 +0000 (0:00:06.963) 0:04:01.232 ****** 2026-02-03 05:52:55.436820 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-02-03 05:52:55.436831 | orchestrator | 2026-02-03 05:52:55.436842 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-02-03 05:52:55.436853 | orchestrator | Tuesday 03 February 2026 05:52:41 +0000 (0:00:02.095) 0:04:03.327 ****** 2026-02-03 05:52:55.436865 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:52:55.436876 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:52:55.436902 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:52:55.436914 | orchestrator | 2026-02-03 05:52:55.436925 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-02-03 05:52:55.436936 | orchestrator | Tuesday 03 February 2026 05:52:43 +0000 (0:00:01.901) 0:04:05.228 ****** 2026-02-03 05:52:55.436947 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:52:55.436958 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:52:55.436968 | orchestrator | changed: 
[testbed-node-2] 2026-02-03 05:52:55.436979 | orchestrator | 2026-02-03 05:52:55.436990 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-02-03 05:52:55.437000 | orchestrator | Tuesday 03 February 2026 05:52:46 +0000 (0:00:02.822) 0:04:08.051 ****** 2026-02-03 05:52:55.437018 | orchestrator | changed: [testbed-node-0] 2026-02-03 05:52:55.437029 | orchestrator | changed: [testbed-node-1] 2026-02-03 05:52:55.437040 | orchestrator | changed: [testbed-node-2] 2026-02-03 05:52:55.437051 | orchestrator | 2026-02-03 05:52:55.437067 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-02-03 05:52:55.437080 | orchestrator | Tuesday 03 February 2026 05:52:49 +0000 (0:00:03.038) 0:04:11.090 ****** 2026-02-03 05:52:55.437094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:52:55.437108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-02-03 05:52:55.437151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:52:55.437164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:52:55.437178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:52:55.437190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:52:55.437211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:53:00.250659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:53:00.250767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-02-03 05:53:00.250785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:53:00.250798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:53:00.250809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:53:00.250821 | orchestrator | 2026-02-03 05:53:00.250835 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-02-03 05:53:00.250847 | orchestrator | Tuesday 03 February 2026 05:52:55 +0000 (0:00:05.968) 0:04:17.059 ****** 2026-02-03 05:53:00.250860 | orchestrator | changed: [testbed-node-0] => { 2026-02-03 
05:53:00.250872 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:53:00.250884 | orchestrator | } 2026-02-03 05:53:00.250895 | orchestrator | changed: [testbed-node-1] => { 2026-02-03 05:53:00.250906 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:53:00.250917 | orchestrator | } 2026-02-03 05:53:00.250928 | orchestrator | changed: [testbed-node-2] => { 2026-02-03 05:53:00.250939 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:53:00.250950 | orchestrator | } 2026-02-03 05:53:00.250961 | orchestrator | 2026-02-03 05:53:00.250972 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-03 05:53:00.251004 | orchestrator | Tuesday 03 February 2026 05:52:56 +0000 (0:00:01.442) 0:04:18.502 ****** 2026-02-03 05:53:00.251017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:53:00.251067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:53:00.251093 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:53:00.251108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:53:00.251183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:53:00.251199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 
'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:53:00.251212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:53:00.251235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:53:00.251249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-03 05:53:00.251274 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-03 05:54:36.035649 | orchestrator | 2026-02-03 05:54:36.035749 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-02-03 05:54:36.035759 | orchestrator | Tuesday 03 February 2026 05:53:00 +0000 (0:00:03.376) 0:04:21.878 ****** 2026-02-03 05:54:36.035766 | orchestrator | changed: [testbed-node-0] => (item=[1]) 2026-02-03 05:54:36.035771 | orchestrator | changed: [testbed-node-1] => (item=[1]) 2026-02-03 05:54:36.035776 | orchestrator | changed: [testbed-node-2] => (item=[1]) 2026-02-03 05:54:36.035781 | orchestrator | 2026-02-03 05:54:36.035787 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-02-03 05:54:36.035792 | orchestrator | Tuesday 03 February 2026 05:53:02 +0000 (0:00:02.357) 0:04:24.235 ****** 2026-02-03 05:54:36.035797 | orchestrator | changed: [testbed-node-0] => { 2026-02-03 05:54:36.035803 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:54:36.035808 | orchestrator | } 2026-02-03 05:54:36.035814 | orchestrator | changed: [testbed-node-1] => { 2026-02-03 05:54:36.035818 | orchestrator |  "msg": "Notifying handlers" 2026-02-03 05:54:36.035823 | orchestrator | } 2026-02-03 05:54:36.035827 | orchestrator | changed: [testbed-node-2] => { 2026-02-03 05:54:36.035832 | orchestrator |  
"msg": "Notifying handlers"
2026-02-03 05:54:36.035836 | orchestrator | }
2026-02-03 05:54:36.035841 | orchestrator |
2026-02-03 05:54:36.035846 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-03 05:54:36.035850 | orchestrator | Tuesday 03 February 2026 05:53:04 +0000 (0:00:01.574) 0:04:25.810 ******
2026-02-03 05:54:36.035855 | orchestrator |
2026-02-03 05:54:36.035860 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-03 05:54:36.035864 | orchestrator | Tuesday 03 February 2026 05:53:04 +0000 (0:00:00.465) 0:04:26.276 ******
2026-02-03 05:54:36.035869 | orchestrator |
2026-02-03 05:54:36.035873 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-03 05:54:36.035878 | orchestrator | Tuesday 03 February 2026 05:53:05 +0000 (0:00:00.493) 0:04:26.769 ******
2026-02-03 05:54:36.035883 | orchestrator |
2026-02-03 05:54:36.035887 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-02-03 05:54:36.035892 | orchestrator | Tuesday 03 February 2026 05:53:06 +0000 (0:00:01.115) 0:04:27.884 ******
2026-02-03 05:54:36.035913 | orchestrator | changed: [testbed-node-1]
2026-02-03 05:54:36.035918 | orchestrator | changed: [testbed-node-2]
2026-02-03 05:54:36.035923 | orchestrator | changed: [testbed-node-0]
2026-02-03 05:54:36.035928 | orchestrator |
2026-02-03 05:54:36.035932 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-02-03 05:54:36.035937 | orchestrator | Tuesday 03 February 2026 05:53:23 +0000 (0:00:17.702) 0:04:45.587 ******
2026-02-03 05:54:36.035941 | orchestrator | changed: [testbed-node-1]
2026-02-03 05:54:36.035946 | orchestrator | changed: [testbed-node-0]
2026-02-03 05:54:36.035950 | orchestrator | changed: [testbed-node-2]
2026-02-03 05:54:36.035955 | orchestrator |
2026-02-03 05:54:36.035959 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] *******************
2026-02-03 05:54:36.035964 | orchestrator | Tuesday 03 February 2026 05:53:42 +0000 (0:00:18.261) 0:05:03.849 ******
2026-02-03 05:54:36.035969 | orchestrator | changed: [testbed-node-0] => (item=1)
2026-02-03 05:54:36.035973 | orchestrator | changed: [testbed-node-2] => (item=1)
2026-02-03 05:54:36.035978 | orchestrator | changed: [testbed-node-1] => (item=1)
2026-02-03 05:54:36.035982 | orchestrator |
2026-02-03 05:54:36.035987 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-02-03 05:54:36.035992 | orchestrator | Tuesday 03 February 2026 05:53:55 +0000 (0:00:12.933) 0:05:16.783 ******
2026-02-03 05:54:36.036000 | orchestrator | changed: [testbed-node-1]
2026-02-03 05:54:36.036008 | orchestrator | changed: [testbed-node-0]
2026-02-03 05:54:36.036016 | orchestrator | changed: [testbed-node-2]
2026-02-03 05:54:36.036023 | orchestrator |
2026-02-03 05:54:36.036031 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-02-03 05:54:36.036038 | orchestrator | Tuesday 03 February 2026 05:54:14 +0000 (0:00:18.983) 0:05:35.766 ******
2026-02-03 05:54:36.036045 | orchestrator | Pausing for 5 seconds
2026-02-03 05:54:36.036053 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:54:36.036060 | orchestrator |
2026-02-03 05:54:36.036162 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-02-03 05:54:36.036179 | orchestrator | Tuesday 03 February 2026 05:54:20 +0000 (0:00:06.289) 0:05:42.056 ******
2026-02-03 05:54:36.036186 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:54:36.036194 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:54:36.036202 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:54:36.036210 | orchestrator |
2026-02-03 05:54:36.036218 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-02-03 05:54:36.036227 | orchestrator | Tuesday 03 February 2026 05:54:22 +0000 (0:00:01.995) 0:05:44.052 ******
2026-02-03 05:54:36.036233 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:54:36.036238 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:54:36.036244 | orchestrator | changed: [testbed-node-0]
2026-02-03 05:54:36.036249 | orchestrator |
2026-02-03 05:54:36.036254 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-02-03 05:54:36.036260 | orchestrator | Tuesday 03 February 2026 05:54:24 +0000 (0:00:01.709) 0:05:45.761 ******
2026-02-03 05:54:36.036268 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:54:36.036276 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:54:36.036283 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:54:36.036291 | orchestrator |
2026-02-03 05:54:36.036299 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-02-03 05:54:36.036307 | orchestrator | Tuesday 03 February 2026 05:54:26 +0000 (0:00:01.986) 0:05:47.748 ******
2026-02-03 05:54:36.036315 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:54:36.036323 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:54:36.036332 | orchestrator | changed: [testbed-node-2]
2026-02-03 05:54:36.036337 | orchestrator |
2026-02-03 05:54:36.036343 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-02-03 05:54:36.036351 | orchestrator | Tuesday 03 February 2026 05:54:28 +0000 (0:00:02.263) 0:05:50.012 ******
2026-02-03 05:54:36.036358 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:54:36.036366 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:54:36.036381 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:54:36.036390 | orchestrator |
2026-02-03 05:54:36.036398 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-02-03 05:54:36.036421 | orchestrator | Tuesday 03 February 2026 05:54:30 +0000 (0:00:01.977) 0:05:51.989 ******
2026-02-03 05:54:36.036429 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:54:36.036437 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:54:36.036445 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:54:36.036452 | orchestrator |
2026-02-03 05:54:36.036460 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] ***************************************
2026-02-03 05:54:36.036467 | orchestrator | Tuesday 03 February 2026 05:54:32 +0000 (0:00:01.908) 0:05:53.898 ******
2026-02-03 05:54:36.036475 | orchestrator | ok: [testbed-node-0] => (item=1)
2026-02-03 05:54:36.036483 | orchestrator | ok: [testbed-node-1] => (item=1)
2026-02-03 05:54:36.036491 | orchestrator | ok: [testbed-node-2] => (item=1)
2026-02-03 05:54:36.036499 | orchestrator |
2026-02-03 05:54:36.036507 | orchestrator | PLAY RECAP *********************************************************************
2026-02-03 05:54:36.036516 | orchestrator | testbed-node-0 : ok=49  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-03 05:54:36.036524 | orchestrator | testbed-node-1 : ok=47  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-03 05:54:36.036532 | orchestrator | testbed-node-2 : ok=48  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-03 05:54:36.036540 | orchestrator | testbed-node-3 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-03 05:54:36.036548 | orchestrator | testbed-node-4 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-03 05:54:36.036555 | orchestrator | testbed-node-5 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-03 05:54:36.036563 | orchestrator |
2026-02-03 05:54:36.036570 | orchestrator |
2026-02-03 05:54:36.036577 | orchestrator | TASKS RECAP ********************************************************************
2026-02-03 05:54:36.036586 | orchestrator | Tuesday 03 February 2026 05:54:35 +0000 (0:00:03.296) 0:05:57.194 ******
2026-02-03 05:54:36.036593 | orchestrator | ===============================================================================
2026-02-03 05:54:36.036601 | orchestrator | ovn-controller : Restart ovn-controller container --------------------- 132.34s
2026-02-03 05:54:36.036608 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 23.02s
2026-02-03 05:54:36.036616 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 18.98s
2026-02-03 05:54:36.036624 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 18.26s
2026-02-03 05:54:36.036631 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 17.70s
2026-02-03 05:54:36.036639 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 12.93s
2026-02-03 05:54:36.036646 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.96s
2026-02-03 05:54:36.036654 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 6.29s
2026-02-03 05:54:36.036661 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 5.97s
2026-02-03 05:54:36.036668 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 4.63s
2026-02-03 05:54:36.036676 | orchestrator | Group hosts based on Kolla action --------------------------------------- 4.24s
2026-02-03 05:54:36.036684 | orchestrator | Group hosts based on enabled services ----------------------------------- 4.15s
2026-02-03 05:54:36.036691 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.80s
2026-02-03 05:54:36.036699 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.38s
2026-02-03 05:54:36.036713 | orchestrator | ovn-db : Wait for ovn-sb-db-relay --------------------------------------- 3.30s
2026-02-03 05:54:36.036721 | orchestrator | ovn-controller : Flush handlers ----------------------------------------- 3.29s
2026-02-03 05:54:36.036728 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 3.08s
2026-02-03 05:54:36.036736 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 3.06s
2026-02-03 05:54:36.036743 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 3.04s
2026-02-03 05:54:36.036751 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 2.82s
2026-02-03 05:54:36.440764 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-02-03 05:54:36.440849 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-03 05:54:36.440859 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh
2026-02-03 05:54:36.452184 | orchestrator | + set -e
2026-02-03 05:54:36.452250 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-03 05:54:36.452257 | orchestrator | ++ export INTERACTIVE=false
2026-02-03 05:54:36.452264 | orchestrator | ++ INTERACTIVE=false
2026-02-03 05:54:36.452269 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-03 05:54:36.452274 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-03 05:54:36.452279 | orchestrator | + osism apply ceph-rolling_update -e ireallymeanit=yes
2026-02-03 05:54:38.693697 | orchestrator | 2026-02-03 05:54:38 | INFO  | Task a4a980ae-fdee-4973-ac6e-f2bac376d3b7 (ceph-rolling_update) was prepared for execution.
2026-02-03 05:54:38.693798 | orchestrator | 2026-02-03 05:54:38 | INFO  | It takes a moment until task a4a980ae-fdee-4973-ac6e-f2bac376d3b7 (ceph-rolling_update) has been started and output is visible here.
2026-02-03 05:56:10.241504 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-03 05:56:10.241600 | orchestrator | 2.16.14
2026-02-03 05:56:10.241611 | orchestrator |
2026-02-03 05:56:10.241618 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] ****************
2026-02-03 05:56:10.241625 | orchestrator |
2026-02-03 05:56:10.241632 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ******************
2026-02-03 05:56:10.241639 | orchestrator | Tuesday 03 February 2026 05:54:48 +0000 (0:00:01.773) 0:00:01.773 ******
2026-02-03 05:56:10.241645 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors
2026-02-03 05:56:10.241652 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss
2026-02-03 05:56:10.241659 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients
2026-02-03 05:56:10.241665 | orchestrator | skipping: [localhost]
2026-02-03 05:56:10.241671 | orchestrator |
2026-02-03 05:56:10.241678 | orchestrator | PLAY [Gather facts and check the init system] **********************************
2026-02-03 05:56:10.241685 | orchestrator |
2026-02-03 05:56:10.241691 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ******************
2026-02-03 05:56:10.241697 | orchestrator | Tuesday 03 February 2026 05:54:51 +0000 (0:00:02.615) 0:00:04.389 ******
2026-02-03 05:56:10.241703 | orchestrator | ok: [testbed-node-0] => {
2026-02-03 05:56:10.241709 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-03 05:56:10.241716 | orchestrator | }
2026-02-03 05:56:10.241722 | orchestrator | ok: [testbed-node-1] => {
2026-02-03 05:56:10.241728 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-03 05:56:10.241733 | orchestrator | }
2026-02-03 05:56:10.241739 | orchestrator | ok: [testbed-node-2] => {
2026-02-03 05:56:10.241746 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-03 05:56:10.241752 | orchestrator | }
2026-02-03 05:56:10.241758 | orchestrator | ok: [testbed-node-3] => {
2026-02-03 05:56:10.241763 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-03 05:56:10.241769 | orchestrator | }
2026-02-03 05:56:10.241775 | orchestrator | ok: [testbed-node-4] => {
2026-02-03 05:56:10.241782 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-03 05:56:10.241808 | orchestrator | }
2026-02-03 05:56:10.241814 | orchestrator | ok: [testbed-node-5] => {
2026-02-03 05:56:10.241820 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-03 05:56:10.241826 | orchestrator | }
2026-02-03 05:56:10.241831 | orchestrator | ok: [testbed-manager] => {
2026-02-03 05:56:10.241837 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-03 05:56:10.241843 | orchestrator | }
2026-02-03 05:56:10.241849 | orchestrator |
2026-02-03 05:56:10.241855 | orchestrator | TASK [Gather facts] ************************************************************
2026-02-03 05:56:10.241861 | orchestrator | Tuesday 03 February 2026 05:54:57 +0000 (0:00:05.961) 0:00:10.351 ******
2026-02-03 05:56:10.241867 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:56:10.241872 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:56:10.241878 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:56:10.241884 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:56:10.241889 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:56:10.241895 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:56:10.241901 | orchestrator | ok: [testbed-manager]
2026-02-03 05:56:10.241907 | orchestrator |
2026-02-03 05:56:10.241913 | orchestrator | TASK [Gather and delegate facts] ***********************************************
2026-02-03 05:56:10.241919 | orchestrator | Tuesday 03 February 2026 05:55:04 +0000 (0:00:07.417) 0:00:17.768 ******
2026-02-03 05:56:10.241924 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-03 05:56:10.241930 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-03 05:56:10.241936 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-03 05:56:10.241942 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-03 05:56:10.241947 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-03 05:56:10.241953 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-03 05:56:10.241959 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-03 05:56:10.241965 | orchestrator |
2026-02-03 05:56:10.241972 | orchestrator | TASK [Set_fact rolling_update] *************************************************
2026-02-03 05:56:10.241978 | orchestrator | Tuesday 03 February 2026 05:55:36 +0000 (0:00:32.302) 0:00:50.071 ******
2026-02-03 05:56:10.241984 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:56:10.241990 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:56:10.241996 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:56:10.242001 | orchestrator | ok: [testbed-node-3]
2026-02-03 05:56:10.242007 | orchestrator | ok: [testbed-node-4]
2026-02-03 05:56:10.242065 | orchestrator | ok: [testbed-node-5]
2026-02-03 05:56:10.242074 | orchestrator | ok: [testbed-manager]
2026-02-03 05:56:10.242080 | orchestrator |
2026-02-03 05:56:10.242087 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-03 05:56:10.242094 | orchestrator | Tuesday 03 February 2026 05:55:39 +0000 (0:00:02.183) 0:00:52.254 ******
2026-02-03 05:56:10.242100 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-02-03 05:56:10.242108 | orchestrator |
2026-02-03 05:56:10.242115 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-03 05:56:10.242122 | orchestrator | Tuesday 03 February 2026 05:55:41 +0000 (0:00:02.907) 0:00:55.162 ******
2026-02-03 05:56:10.242142 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:56:10.242149 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:56:10.242155 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:56:10.242182 | orchestrator | ok: [testbed-node-3]
2026-02-03 05:56:10.242205 | orchestrator | ok: [testbed-node-4]
2026-02-03 05:56:10.242212 | orchestrator | ok: [testbed-node-5]
2026-02-03 05:56:10.242218 | orchestrator | ok: [testbed-manager]
2026-02-03 05:56:10.242236 | orchestrator |
2026-02-03 05:56:10.242258 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-03 05:56:10.242266 | orchestrator | Tuesday 03 February 2026 05:55:44 +0000 (0:00:02.927) 0:00:58.089 ******
2026-02-03 05:56:10.242273 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:56:10.242280 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:56:10.242286 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:56:10.242293 | orchestrator | ok: [testbed-node-3]
2026-02-03 05:56:10.242300 | orchestrator | ok: [testbed-node-4]
2026-02-03 05:56:10.242307 | orchestrator | ok: [testbed-node-5]
2026-02-03 05:56:10.242313 | orchestrator | ok: [testbed-manager]
2026-02-03 05:56:10.242320 | orchestrator |
2026-02-03 05:56:10.242327 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-03 05:56:10.242334 | orchestrator | Tuesday 03 February 2026 05:55:47 +0000 (0:00:02.665) 0:01:00.464 ******
2026-02-03 05:56:10.242341 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:56:10.242348 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:56:10.242354 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:56:10.242361 | orchestrator | ok: [testbed-node-3]
2026-02-03 05:56:10.242368 | orchestrator | ok: [testbed-node-4]
2026-02-03 05:56:10.242375 | orchestrator | ok: [testbed-node-5]
2026-02-03 05:56:10.242381 | orchestrator | ok: [testbed-manager]
2026-02-03 05:56:10.242388 | orchestrator |
2026-02-03 05:56:10.242395 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-03 05:56:10.242402 | orchestrator | Tuesday 03 February 2026 05:55:49 +0000 (0:00:02.073) 0:01:03.129 ******
2026-02-03 05:56:10.242409 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:56:10.242415 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:56:10.242422 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:56:10.242428 | orchestrator | ok: [testbed-node-3]
2026-02-03 05:56:10.242434 | orchestrator | ok: [testbed-node-4]
2026-02-03 05:56:10.242440 | orchestrator | ok: [testbed-node-5]
2026-02-03 05:56:10.242445 | orchestrator | ok: [testbed-manager]
2026-02-03 05:56:10.242451 | orchestrator |
2026-02-03 05:56:10.242457 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-03 05:56:10.242463 | orchestrator | Tuesday 03 February 2026 05:55:52 +0000 (0:00:02.326) 0:01:05.203 ******
2026-02-03 05:56:10.242469 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:56:10.242475 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:56:10.242481 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:56:10.242487 | orchestrator | ok: [testbed-node-3]
2026-02-03 05:56:10.242492 | orchestrator | ok: [testbed-node-4]
2026-02-03 05:56:10.242498 | orchestrator | ok: [testbed-node-5]
2026-02-03 05:56:10.242504 | orchestrator | ok: [testbed-manager]
2026-02-03 05:56:10.242510 | orchestrator |
2026-02-03 05:56:10.242517 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-03 05:56:10.242524 | orchestrator | Tuesday 03 February 2026 05:55:54 +0000 (0:00:02.326) 0:01:07.529 ******
2026-02-03 05:56:10.242530 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:56:10.242536 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:56:10.242542 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:56:10.242548 | orchestrator | ok: [testbed-node-3]
2026-02-03 05:56:10.242553 | orchestrator | ok: [testbed-node-4]
2026-02-03 05:56:10.242559 | orchestrator | ok: [testbed-node-5]
2026-02-03 05:56:10.242565 | orchestrator | ok: [testbed-manager]
2026-02-03 05:56:10.242570 | orchestrator |
2026-02-03 05:56:10.242576 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-03 05:56:10.242582 | orchestrator | Tuesday 03 February 2026 05:55:56 +0000 (0:00:02.230) 0:01:09.759 ******
2026-02-03 05:56:10.242588 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:56:10.242595 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:56:10.242600 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:56:10.242606 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:56:10.242611 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:56:10.242617 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:56:10.242624 | orchestrator | skipping: [testbed-manager]
2026-02-03 05:56:10.242634 | orchestrator |
2026-02-03 05:56:10.242640 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-03 05:56:10.242647 | orchestrator | Tuesday 03 February 2026 05:55:58 +0000 (0:00:02.368) 0:01:12.128 ******
2026-02-03 05:56:10.242653 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:56:10.242658 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:56:10.242665 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:56:10.242671 | orchestrator | ok: [testbed-node-3]
2026-02-03 05:56:10.242677 | orchestrator | ok: [testbed-node-4]
2026-02-03 05:56:10.242683 | orchestrator | ok: [testbed-node-5]
2026-02-03 05:56:10.242689 | orchestrator | ok: [testbed-manager]
2026-02-03 05:56:10.242695 | orchestrator |
2026-02-03 05:56:10.242701 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-03 05:56:10.242707 | orchestrator | Tuesday 03 February 2026 05:56:01 +0000 (0:00:02.478) 0:01:14.607 ******
2026-02-03 05:56:10.242713 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-03 05:56:10.242719 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-03 05:56:10.242725 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-03 05:56:10.242731 | orchestrator |
2026-02-03 05:56:10.242737 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-03 05:56:10.242743 | orchestrator | Tuesday 03 February 2026 05:56:03 +0000 (0:00:01.731) 0:01:16.339 ******
2026-02-03 05:56:10.242748 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:56:10.242754 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:56:10.242760 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:56:10.242766 | orchestrator | ok: [testbed-node-3]
2026-02-03 05:56:10.242772 | orchestrator | ok: [testbed-node-4]
2026-02-03 05:56:10.242778 | orchestrator | ok: [testbed-node-5]
2026-02-03 05:56:10.242784 | orchestrator | ok: [testbed-manager]
2026-02-03 05:56:10.242790 | orchestrator |
2026-02-03 05:56:10.242796 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-03 05:56:10.242802 | orchestrator | Tuesday 03 February 2026 05:56:05 +0000 (0:00:02.220) 0:01:18.560 ******
2026-02-03 05:56:10.242809 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-03 05:56:10.242816 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-03 05:56:10.242834 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-03 05:56:10.242841 | orchestrator |
2026-02-03 05:56:10.242847 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-03 05:56:10.242853 | orchestrator | Tuesday 03 February 2026 05:56:08 +0000 (0:00:03.433) 0:01:21.994 ******
2026-02-03 05:56:10.242865 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-03 05:56:34.908013 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-03 05:56:34.908106 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-03 05:56:34.908123 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:56:34.908137 | orchestrator |
2026-02-03 05:56:34.908152 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-03 05:56:34.908187 | orchestrator | Tuesday 03 February 2026 05:56:10 +0000 (0:00:01.422) 0:01:23.416 ******
2026-02-03 05:56:34.908203 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-03 05:56:34.908218 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-03 05:56:34.908231 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-03 05:56:34.908275 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:56:34.908290 | orchestrator |
2026-02-03 05:56:34.908303 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-03 05:56:34.908316 | orchestrator | Tuesday 03 February 2026 05:56:12 +0000 (0:00:02.058) 0:01:25.475 ******
2026-02-03 05:56:34.908329 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-03 05:56:34.908340 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-03 05:56:34.908348 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-03 05:56:34.908356 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:56:34.908364 | orchestrator |
2026-02-03 05:56:34.908372 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-03 05:56:34.908380 | orchestrator | Tuesday 03 February 2026 05:56:13 +0000 (0:00:01.227) 0:01:26.702 ******
2026-02-03 05:56:34.908390 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'f906be70bf4b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-03 05:56:06.109636', 'end': '2026-02-03 05:56:06.156636', 'delta': '0:00:00.047000', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f906be70bf4b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-03 05:56:34.908432 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '9e707d2df2a9', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-03 05:56:06.948598', 'end': '2026-02-03 05:56:07.003665', 'delta': '0:00:00.055067', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9e707d2df2a9'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-03 05:56:34.908442 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '7edf8d69a692', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-03 05:56:07.545468', 'end': '2026-02-03 05:56:07.588970', 'delta': '0:00:00.043502', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7edf8d69a692'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-03 05:56:34.908457 | orchestrator |
2026-02-03 05:56:34.908465 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-03 05:56:34.908473 | orchestrator | Tuesday 03 February 2026 05:56:14 +0000 (0:00:01.308) 0:01:28.010 ******
2026-02-03 05:56:34.908481 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:56:34.908490 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:56:34.908499 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:56:34.908507 | orchestrator | ok: [testbed-node-3]
2026-02-03 05:56:34.908515 | orchestrator | ok: [testbed-node-4]
2026-02-03 05:56:34.908523 | orchestrator | ok: [testbed-node-5]
2026-02-03 05:56:34.908531 | orchestrator | ok: [testbed-manager]
2026-02-03 05:56:34.908538 | orchestrator |
2026-02-03 05:56:34.908547 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-03 05:56:34.908554 | orchestrator | Tuesday 03 February 2026 05:56:17 +0000 (0:00:02.750) 0:01:30.761 ******
2026-02-03 05:56:34.908562 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:56:34.908570 | orchestrator |
2026-02-03 05:56:34.908578 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-03 05:56:34.908586 | orchestrator | Tuesday 03 February 2026 05:56:18 +0000 (0:00:01.296) 0:01:32.057 ******
2026-02-03 05:56:34.908594 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:56:34.908602 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:56:34.908610 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:56:34.908618 | orchestrator | ok: [testbed-node-3]
2026-02-03 05:56:34.908625 | orchestrator | ok: [testbed-node-4]
2026-02-03 05:56:34.908633 | orchestrator | ok: [testbed-node-5]
2026-02-03 05:56:34.908641 | orchestrator | ok: [testbed-manager]
2026-02-03 05:56:34.908649 | orchestrator |
2026-02-03 05:56:34.908657 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-03 05:56:34.908665 | orchestrator | Tuesday 03 February 2026 05:56:21 +0000 (0:00:02.454) 0:01:34.512 ******
2026-02-03 05:56:34.908673 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:56:34.908681 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-02-03 05:56:34.908689 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-03 05:56:34.908697 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-03 05:56:34.908704 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-03 05:56:34.908712 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-03 05:56:34.908720 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-03 05:56:34.908728 | orchestrator |
2026-02-03 05:56:34.908736 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-03 05:56:34.908744 | orchestrator | Tuesday 03 February 2026 05:56:24 +0000 (0:00:03.411) 0:01:37.924 ******
2026-02-03 05:56:34.908752 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:56:34.908760 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:56:34.908767 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:56:34.908775 | orchestrator | ok: [testbed-node-3]
2026-02-03 05:56:34.908783 | orchestrator | ok: [testbed-node-4]
2026-02-03 05:56:34.908791 | orchestrator | ok: [testbed-node-5]
2026-02-03 05:56:34.908799 | orchestrator | ok: [testbed-manager]
2026-02-03 05:56:34.908806 | orchestrator |
2026-02-03 05:56:34.908814 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-03 05:56:34.908822 | orchestrator | Tuesday 03 February 2026 05:56:27 +0000 (0:00:02.587) 0:01:40.512 ******
2026-02-03 05:56:34.908830 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:56:34.908838 | orchestrator |
2026-02-03 05:56:34.908846 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-03 05:56:34.908854 | orchestrator | Tuesday 03 February 2026 05:56:28 +0000 (0:00:01.228) 0:01:41.741 ******
2026-02-03 05:56:34.908867 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:56:34.908875 | orchestrator |
2026-02-03 05:56:34.908883 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-03 05:56:34.908891 | orchestrator | Tuesday 03 February 2026 05:56:29 +0000 (0:00:01.293) 0:01:43.034 ******
2026-02-03 05:56:34.908899 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:56:34.908906 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:56:34.908914 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:56:34.908922 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:56:34.908930 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:56:34.908938 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:56:34.908946 | orchestrator | skipping: [testbed-manager]
2026-02-03 05:56:34.908954 | orchestrator |
2026-02-03 05:56:34.908962 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-03 05:56:34.908973 | orchestrator | Tuesday 03 February 2026 05:56:32 +0000 (0:00:02.285) 0:01:45.797 ******
2026-02-03 05:56:34.908981 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:56:34.908989 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:56:34.908997 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:56:34.909005 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:56:34.909013 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:56:34.909021 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:56:34.909034 | orchestrator | skipping: [testbed-manager]
2026-02-03 05:56:47.424013 | orchestrator |
2026-02-03 05:56:47.424102 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-03 05:56:47.424112 | orchestrator | Tuesday 03 February 2026 05:56:34 +0000 (0:00:02.768) 0:01:48.082 ******
2026-02-03 05:56:47.424118 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:56:47.424124 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:56:47.424130 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:56:47.424136 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:56:47.424142 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:56:47.424147 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:56:47.424153 | orchestrator | skipping: [testbed-manager]
2026-02-03 05:56:47.424158 | orchestrator |
2026-02-03 05:56:47.424164 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-03 05:56:47.424204 | orchestrator | Tuesday 03 February 2026 05:56:37 +0000 (0:00:02.768) 0:01:50.851 ******
2026-02-03 05:56:47.424210 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:56:47.424215 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:56:47.424221 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:56:47.424226 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:56:47.424233 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:56:47.424238 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:56:47.424244 | orchestrator | skipping: [testbed-manager]
2026-02-03 05:56:47.424249 | orchestrator |
2026-02-03 05:56:47.424255 | orchestrator | TASK [ceph-facts
: Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-03 05:56:47.424260 | orchestrator | Tuesday 03 February 2026 05:56:40 +0000 (0:00:02.382) 0:01:53.234 ****** 2026-02-03 05:56:47.424266 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:56:47.424272 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:56:47.424277 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:56:47.424282 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:56:47.424288 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:56:47.424293 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:56:47.424299 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:56:47.424306 | orchestrator | 2026-02-03 05:56:47.424316 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-03 05:56:47.424326 | orchestrator | Tuesday 03 February 2026 05:56:42 +0000 (0:00:02.497) 0:01:55.732 ****** 2026-02-03 05:56:47.424335 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:56:47.424344 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:56:47.424384 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:56:47.424391 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:56:47.424397 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:56:47.424402 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:56:47.424408 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:56:47.424413 | orchestrator | 2026-02-03 05:56:47.424419 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-03 05:56:47.424425 | orchestrator | Tuesday 03 February 2026 05:56:44 +0000 (0:00:02.314) 0:01:58.047 ****** 2026-02-03 05:56:47.424430 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:56:47.424435 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:56:47.424441 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:56:47.424446 | 
orchestrator | skipping: [testbed-node-3] 2026-02-03 05:56:47.424452 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:56:47.424457 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:56:47.424462 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:56:47.424468 | orchestrator | 2026-02-03 05:56:47.424473 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-03 05:56:47.424479 | orchestrator | Tuesday 03 February 2026 05:56:47 +0000 (0:00:02.376) 0:02:00.423 ****** 2026-02-03 05:56:47.424486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:47.424495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:47.424501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:47.424532 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-03 05:56:47.424540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:47.424546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:47.424559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:47.424568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8b2ebf21', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part16', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part14', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part15', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part1', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 
'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-03 05:56:47.424579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:47.424590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:47.776283 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:56:47.776374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:47.776405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 
05:56:47.776413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:47.776423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-34-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-03 05:56:47.776433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:47.776441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:47.776449 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:47.776487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '24352e15', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part16', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part14', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part15', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': 
{'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part1', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-03 05:56:47.776505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:47.776518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:47.776530 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:56:47.776544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': 
'0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:47.776557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:47.776571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:47.776584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-40-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-03 05:56:47.776610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 
'host': '', 'holders': []}})  2026-02-03 05:56:48.124014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:48.124131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:48.124160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5699a710', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part16', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part14', 
'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part15', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part1', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-03 05:56:48.124248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:48.124263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:48.124320 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:48.124335 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--bafb60f3--a5a9--526b--adce--8ea58a9a19cd-osd--block--bafb60f3--a5a9--526b--adce--8ea58a9a19cd', 'dm-uuid-LVM-stKE3AAHbU7tUFxIQAJ72dtWy4EVot1jnVMQamLoChpHBSYL0cLNGgZFRZ56lw3T'], 'uuids': ['027247ae-00a3-443e-9633-8d8391a7da1a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '8097be92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['nVMQam-LoCh-pHBS-YL0c-LNGg-ZFRZ-56lw3T']}})  2026-02-03 05:56:48.124349 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_30942d1f-f704-43cc-bdd1-e5a5821a35c3', 'scsi-SQEMU_QEMU_HARDDISK_30942d1f-f704-43cc-bdd1-e5a5821a35c3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '30942d1f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-03 05:56:48.124363 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Xh8ZTx-AObI-x7Qe-6Flc-GeSw-194p-Pfmv8i', 'scsi-0QEMU_QEMU_HARDDISK_b4cf4752-8315-482c-8a5b-0aee9859091f', 'scsi-SQEMU_QEMU_HARDDISK_b4cf4752-8315-482c-8a5b-0aee9859091f'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b4cf4752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--85b6ff9c--bd3f--596f--9d81--0006b9d69e29-osd--block--85b6ff9c--bd3f--596f--9d81--0006b9d69e29']}})  2026-02-03 05:56:48.124376 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:48.124388 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:56:48.124401 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:48.124418 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-03 05:56:48.124454 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:48.428639 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Vax7AP-xd4A-5hvn-IJK2-L8WY-uJjg-ErTdLp', 'dm-uuid-CRYPT-LUKS2-51cdba44ba2f44e4a9ba680ba42622f2-Vax7AP-xd4A-5hvn-IJK2-L8WY-uJjg-ErTdLp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-03 05:56:48.428767 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:48.428788 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--85b6ff9c--bd3f--596f--9d81--0006b9d69e29-osd--block--85b6ff9c--bd3f--596f--9d81--0006b9d69e29', 'dm-uuid-LVM-eCnBPCzOsBAMg7ZG1zzxsebDLR9lBnAnVax7APxd4A5hvnIJK2L8WYuJjgErTdLp'], 'uuids': ['51cdba44-ba2f-44e4-a9ba-680ba42622f2'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b4cf4752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': 
['Vax7AP-xd4A-5hvn-IJK2-L8WY-uJjg-ErTdLp']}})  2026-02-03 05:56:48.428803 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-MNylkH-UFIw-FcM9-RNy8-22Oh-QCDT-pfyDSJ', 'scsi-0QEMU_QEMU_HARDDISK_8097be92-44ca-4be8-a1da-39ba5887696e', 'scsi-SQEMU_QEMU_HARDDISK_8097be92-44ca-4be8-a1da-39ba5887696e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8097be92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--bafb60f3--a5a9--526b--adce--8ea58a9a19cd-osd--block--bafb60f3--a5a9--526b--adce--8ea58a9a19cd']}})  2026-02-03 05:56:48.428815 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:48.428870 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '26fa6d1d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part16', 
'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part14', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part15', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part1', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-03 05:56:48.428911 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:48.428925 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:48.428945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-nVMQam-LoCh-pHBS-YL0c-LNGg-ZFRZ-56lw3T', 'dm-uuid-CRYPT-LUKS2-027247ae00a3443e96338d8391a7da1a-nVMQam-LoCh-pHBS-YL0c-LNGg-ZFRZ-56lw3T'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-03 05:56:48.428968 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:56:48.428990 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 
'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:48.429018 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1a37b12a--042e--589b--8d7d--13944ef33291-osd--block--1a37b12a--042e--589b--8d7d--13944ef33291', 'dm-uuid-LVM-F6tlR8rX28mHBuGZmIB9CPxCef1PwVO1F69HDz3pfwyuxUfx8QlY6u3q4wNOYZvt'], 'uuids': ['ee84a40a-c8f5-4363-8b92-865eb14b3049'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f58f055b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['F69HDz-3pfw-yuxU-fx8Q-lY6u-3q4w-NOYZvt']}})  2026-02-03 05:56:48.429072 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15b94581-7087-40af-83f2-cd9970e768be', 'scsi-SQEMU_QEMU_HARDDISK_15b94581-7087-40af-83f2-cd9970e768be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '15b94581', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-03 05:56:48.553987 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-OIAfSx-9FrO-G71T-2YtW-9cXZ-u9sv-iVlruI', 'scsi-0QEMU_QEMU_HARDDISK_6b074c22-654d-40e5-9251-7e10d9fad41a', 'scsi-SQEMU_QEMU_HARDDISK_6b074c22-654d-40e5-9251-7e10d9fad41a'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6b074c22', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--121565c5--01e5--5794--959e--075d91e35362-osd--block--121565c5--01e5--5794--959e--075d91e35362']}})  2026-02-03 05:56:48.554253 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:48.554279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:48.554292 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-37-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-03 05:56:48.554305 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:48.554356 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0SrgRP-hb81-g1cZ-8CRq-deoz-Hyru-PhRzun', 'dm-uuid-CRYPT-LUKS2-1805b057808e47489bd25959cb85c8e5-0SrgRP-hb81-g1cZ-8CRq-deoz-Hyru-PhRzun'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-03 05:56:48.554369 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:48.554381 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:48.554414 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--121565c5--01e5--5794--959e--075d91e35362-osd--block--121565c5--01e5--5794--959e--075d91e35362', 'dm-uuid-LVM-JxrjzObQ9uufb9OS44FMciQneXibANhw0SrgRPhb81g1cZ8CRqdeozHyruPhRzun'], 'uuids': ['1805b057-808e-4748-9bd2-5959cb85c8e5'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6b074c22', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0SrgRP-hb81-g1cZ-8CRq-deoz-Hyru-PhRzun']}})  2026-02-03 05:56:48.554427 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-QlIL1O-6aa2-xc1n-eTaR-0yU7-qpeR-rfKE1n', 'scsi-0QEMU_QEMU_HARDDISK_f58f055b-eadc-4fe1-a72e-2d1917f1f2dd', 'scsi-SQEMU_QEMU_HARDDISK_f58f055b-eadc-4fe1-a72e-2d1917f1f2dd'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f58f055b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--1a37b12a--042e--589b--8d7d--13944ef33291-osd--block--1a37b12a--042e--589b--8d7d--13944ef33291']}})  2026-02-03 05:56:48.554439 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--77c51d77--cdc1--5563--af81--33d9bc4e9bd8-osd--block--77c51d77--cdc1--5563--af81--33d9bc4e9bd8', 'dm-uuid-LVM-Wbq8zZZmzC2gBNhxYxtVTvfLotN9I39ewfUHEKJIYaxWx1lem6PI2cmyC5FHw26a'], 'uuids': ['de4b76bf-9af2-40ae-a6b3-4edbecd71396'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0bcbc917', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['wfUHEK-JIYa-xWx1-lem6-PI2c-myC5-FHw26a']}})  2026-02-03 05:56:48.554451 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:48.554496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9ac79520', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part16', 
'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-03 05:56:48.736883 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:48.736985 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ed5f26b-b68f-43a9-951f-f4acae255308', 'scsi-SQEMU_QEMU_HARDDISK_1ed5f26b-b68f-43a9-951f-f4acae255308'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1ed5f26b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-03 05:56:48.737003 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:48.737017 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-fs9ehM-rHKw-gnft-ZAPg-F21u-3MhY-bxvv54', 'scsi-0QEMU_QEMU_HARDDISK_a2e14d93-a486-403c-9c37-4f6de49ddee5', 'scsi-SQEMU_QEMU_HARDDISK_a2e14d93-a486-403c-9c37-4f6de49ddee5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2e14d93', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--9cbb71d1--90c1--5063--b304--f845b9e79bfb-osd--block--9cbb71d1--90c1--5063--b304--f845b9e79bfb']}})  2026-02-03 05:56:48.737068 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-F69HDz-3pfw-yuxU-fx8Q-lY6u-3q4w-NOYZvt', 'dm-uuid-CRYPT-LUKS2-ee84a40ac8f543638b92865eb14b3049-F69HDz-3pfw-yuxU-fx8Q-lY6u-3q4w-NOYZvt'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-03 05:56:48.737082 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:48.737094 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:48.737124 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': 
'1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-03 05:56:48.737137 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:48.737149 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-pTdMvd-sDAF-IMSx-8jpl-0O46-FJH5-Fa8Xca', 'dm-uuid-CRYPT-LUKS2-828a04c154134531b57bb1d5e612c63b-pTdMvd-sDAF-IMSx-8jpl-0O46-FJH5-Fa8Xca'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-03 05:56:48.737160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:48.737256 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': 
['dm-name-ceph--9cbb71d1--90c1--5063--b304--f845b9e79bfb-osd--block--9cbb71d1--90c1--5063--b304--f845b9e79bfb', 'dm-uuid-LVM-mOPc0Zn7dvz2LW84SWB0gFMNdSnKuErspTdMvdsDAFIMSx8jpl0O46FJH5Fa8Xca'], 'uuids': ['828a04c1-5413-4531-b57b-b1d5e612c63b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2e14d93', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['pTdMvd-sDAF-IMSx-8jpl-0O46-FJH5-Fa8Xca']}})  2026-02-03 05:56:48.737278 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:56:48.737300 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-XC0deN-vGzU-6Pu8-7l0p-bm5X-RdCc-NCjXuW', 'scsi-0QEMU_QEMU_HARDDISK_0bcbc917-1e4e-4947-8603-c7f49bd04ea8', 'scsi-SQEMU_QEMU_HARDDISK_0bcbc917-1e4e-4947-8603-c7f49bd04ea8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0bcbc917', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--77c51d77--cdc1--5563--af81--33d9bc4e9bd8-osd--block--77c51d77--cdc1--5563--af81--33d9bc4e9bd8']}})  2026-02-03 05:56:48.737320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:48.737357 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1e34e583', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part16', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part14', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part15', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part1', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-03 05:56:50.261686 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:50.261766 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:50.261776 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-wfUHEK-JIYa-xWx1-lem6-PI2c-myC5-FHw26a', 'dm-uuid-CRYPT-LUKS2-de4b76bf9af240aea6b34edbecd71396-wfUHEK-JIYa-xWx1-lem6-PI2c-myC5-FHw26a'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-03 05:56:50.261785 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:56:50.261792 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:50.261799 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:50.261805 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:50.261811 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-25-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-03 05:56:50.261817 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:50.261855 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:50.261863 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:50.261874 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_baaa19a3-0465-4ace-be40-0edae040cc8f', 'scsi-SQEMU_QEMU_HARDDISK_baaa19a3-0465-4ace-be40-0edae040cc8f'], 
'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'baaa19a3', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_baaa19a3-0465-4ace-be40-0edae040cc8f-part16', 'scsi-SQEMU_QEMU_HARDDISK_baaa19a3-0465-4ace-be40-0edae040cc8f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_baaa19a3-0465-4ace-be40-0edae040cc8f-part14', 'scsi-SQEMU_QEMU_HARDDISK_baaa19a3-0465-4ace-be40-0edae040cc8f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_baaa19a3-0465-4ace-be40-0edae040cc8f-part15', 'scsi-SQEMU_QEMU_HARDDISK_baaa19a3-0465-4ace-be40-0edae040cc8f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_baaa19a3-0465-4ace-be40-0edae040cc8f-part1', 'scsi-SQEMU_QEMU_HARDDISK_baaa19a3-0465-4ace-be40-0edae040cc8f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-03 05:56:50.261882 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:50.261888 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 05:56:50.261899 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:56:50.261906 | orchestrator | 2026-02-03 05:56:50.261912 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-03 05:56:50.261919 | orchestrator | Tuesday 03 February 2026 05:56:50 +0000 (0:00:02.810) 0:02:03.234 ****** 2026-02-03 05:56:50.261932 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.450264 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.450373 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.450390 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.450403 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.450415 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.450448 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.450497 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8b2ebf21', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part16', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part14', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part15', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part1', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.450523 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.450556 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.450570 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.450603 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.656611 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.656715 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': 
['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-34-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.656732 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.656745 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.656781 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.656834 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '24352e15', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part16', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part14', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part15', 
'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part1', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.656850 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.656869 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.656882 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:56:50.656895 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.656907 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.656933 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.932876 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-40-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.932970 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.933016 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 
'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.933032 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.933048 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:56:50.933093 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5699a710', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part16', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': 
[]}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part14', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part15', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part1', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.933104 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.933119 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.933128 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 
'item'})  2026-02-03 05:56:50.933142 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--bafb60f3--a5a9--526b--adce--8ea58a9a19cd-osd--block--bafb60f3--a5a9--526b--adce--8ea58a9a19cd', 'dm-uuid-LVM-stKE3AAHbU7tUFxIQAJ72dtWy4EVot1jnVMQamLoChpHBSYL0cLNGgZFRZ56lw3T'], 'uuids': ['027247ae-00a3-443e-9633-8d8391a7da1a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '8097be92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['nVMQam-LoCh-pHBS-YL0c-LNGg-ZFRZ-56lw3T']}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:50.933157 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_30942d1f-f704-43cc-bdd1-e5a5821a35c3', 'scsi-SQEMU_QEMU_HARDDISK_30942d1f-f704-43cc-bdd1-e5a5821a35c3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '30942d1f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.321107 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:56:51.321203 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Xh8ZTx-AObI-x7Qe-6Flc-GeSw-194p-Pfmv8i', 'scsi-0QEMU_QEMU_HARDDISK_b4cf4752-8315-482c-8a5b-0aee9859091f', 'scsi-SQEMU_QEMU_HARDDISK_b4cf4752-8315-482c-8a5b-0aee9859091f'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b4cf4752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--85b6ff9c--bd3f--596f--9d81--0006b9d69e29-osd--block--85b6ff9c--bd3f--596f--9d81--0006b9d69e29']}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.321231 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.321238 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery 
| default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.321244 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.321258 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.321276 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.321281 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Vax7AP-xd4A-5hvn-IJK2-L8WY-uJjg-ErTdLp', 'dm-uuid-CRYPT-LUKS2-51cdba44ba2f44e4a9ba680ba42622f2-Vax7AP-xd4A-5hvn-IJK2-L8WY-uJjg-ErTdLp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.321289 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1a37b12a--042e--589b--8d7d--13944ef33291-osd--block--1a37b12a--042e--589b--8d7d--13944ef33291', 'dm-uuid-LVM-F6tlR8rX28mHBuGZmIB9CPxCef1PwVO1F69HDz3pfwyuxUfx8QlY6u3q4wNOYZvt'], 'uuids': ['ee84a40a-c8f5-4363-8b92-865eb14b3049'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f58f055b', 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['F69HDz-3pfw-yuxU-fx8Q-lY6u-3q4w-NOYZvt']}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.321295 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15b94581-7087-40af-83f2-cd9970e768be', 'scsi-SQEMU_QEMU_HARDDISK_15b94581-7087-40af-83f2-cd9970e768be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '15b94581', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.321303 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.321311 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-OIAfSx-9FrO-G71T-2YtW-9cXZ-u9sv-iVlruI', 'scsi-0QEMU_QEMU_HARDDISK_6b074c22-654d-40e5-9251-7e10d9fad41a', 'scsi-SQEMU_QEMU_HARDDISK_6b074c22-654d-40e5-9251-7e10d9fad41a'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6b074c22', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--121565c5--01e5--5794--959e--075d91e35362-osd--block--121565c5--01e5--5794--959e--075d91e35362']}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.579263 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.579410 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--85b6ff9c--bd3f--596f--9d81--0006b9d69e29-osd--block--85b6ff9c--bd3f--596f--9d81--0006b9d69e29', 'dm-uuid-LVM-eCnBPCzOsBAMg7ZG1zzxsebDLR9lBnAnVax7APxd4A5hvnIJK2L8WYuJjgErTdLp'], 'uuids': ['51cdba44-ba2f-44e4-a9ba-680ba42622f2'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b4cf4752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Vax7AP-xd4A-5hvn-IJK2-L8WY-uJjg-ErTdLp']}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.579429 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 
'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.579458 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-MNylkH-UFIw-FcM9-RNy8-22Oh-QCDT-pfyDSJ', 'scsi-0QEMU_QEMU_HARDDISK_8097be92-44ca-4be8-a1da-39ba5887696e', 'scsi-SQEMU_QEMU_HARDDISK_8097be92-44ca-4be8-a1da-39ba5887696e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8097be92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--bafb60f3--a5a9--526b--adce--8ea58a9a19cd-osd--block--bafb60f3--a5a9--526b--adce--8ea58a9a19cd']}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.579475 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.579507 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.579537 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.579550 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0SrgRP-hb81-g1cZ-8CRq-deoz-Hyru-PhRzun', 'dm-uuid-CRYPT-LUKS2-1805b057808e47489bd25959cb85c8e5-0SrgRP-hb81-g1cZ-8CRq-deoz-Hyru-PhRzun'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.579569 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '26fa6d1d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part16', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 
'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part14', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part15', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part1', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.579596 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.674275 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--121565c5--01e5--5794--959e--075d91e35362-osd--block--121565c5--01e5--5794--959e--075d91e35362', 'dm-uuid-LVM-JxrjzObQ9uufb9OS44FMciQneXibANhw0SrgRPhb81g1cZ8CRqdeozHyruPhRzun'], 'uuids': ['1805b057-808e-4748-9bd2-5959cb85c8e5'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6b074c22', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0SrgRP-hb81-g1cZ-8CRq-deoz-Hyru-PhRzun']}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.674388 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 
None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.674430 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-QlIL1O-6aa2-xc1n-eTaR-0yU7-qpeR-rfKE1n', 'scsi-0QEMU_QEMU_HARDDISK_f58f055b-eadc-4fe1-a72e-2d1917f1f2dd', 'scsi-SQEMU_QEMU_HARDDISK_f58f055b-eadc-4fe1-a72e-2d1917f1f2dd'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f58f055b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--1a37b12a--042e--589b--8d7d--13944ef33291-osd--block--1a37b12a--042e--589b--8d7d--13944ef33291']}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.674448 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.674486 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9ac79520', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part14', 
'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.674525 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.674538 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.674637 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  
2026-02-03 05:56:51.674666 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-nVMQam-LoCh-pHBS-YL0c-LNGg-ZFRZ-56lw3T', 'dm-uuid-CRYPT-LUKS2-027247ae00a3443e96338d8391a7da1a-nVMQam-LoCh-pHBS-YL0c-LNGg-ZFRZ-56lw3T'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.674689 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.833763 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-F69HDz-3pfw-yuxU-fx8Q-lY6u-3q4w-NOYZvt', 'dm-uuid-CRYPT-LUKS2-ee84a40ac8f543638b92865eb14b3049-F69HDz-3pfw-yuxU-fx8Q-lY6u-3q4w-NOYZvt'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.833894 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:56:51.833920 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:56:51.833940 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--77c51d77--cdc1--5563--af81--33d9bc4e9bd8-osd--block--77c51d77--cdc1--5563--af81--33d9bc4e9bd8', 'dm-uuid-LVM-Wbq8zZZmzC2gBNhxYxtVTvfLotN9I39ewfUHEKJIYaxWx1lem6PI2cmyC5FHw26a'], 'uuids': ['de4b76bf-9af2-40ae-a6b3-4edbecd71396'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0bcbc917', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['wfUHEK-JIYa-xWx1-lem6-PI2c-myC5-FHw26a']}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.833980 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ed5f26b-b68f-43a9-951f-f4acae255308', 'scsi-SQEMU_QEMU_HARDDISK_1ed5f26b-b68f-43a9-951f-f4acae255308'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1ed5f26b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 
'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.834084 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.834131 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-fs9ehM-rHKw-gnft-ZAPg-F21u-3MhY-bxvv54', 'scsi-0QEMU_QEMU_HARDDISK_a2e14d93-a486-403c-9c37-4f6de49ddee5', 'scsi-SQEMU_QEMU_HARDDISK_a2e14d93-a486-403c-9c37-4f6de49ddee5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2e14d93', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--9cbb71d1--90c1--5063--b304--f845b9e79bfb-osd--block--9cbb71d1--90c1--5063--b304--f845b9e79bfb']}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.834156 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.834201 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.834214 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 
'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.834234 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-25-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.834257 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.834277 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.834307 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.960095 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.960261 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 
'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.960304 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_baaa19a3-0465-4ace-be40-0edae040cc8f', 'scsi-SQEMU_QEMU_HARDDISK_baaa19a3-0465-4ace-be40-0edae040cc8f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'baaa19a3', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_baaa19a3-0465-4ace-be40-0edae040cc8f-part16', 'scsi-SQEMU_QEMU_HARDDISK_baaa19a3-0465-4ace-be40-0edae040cc8f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_baaa19a3-0465-4ace-be40-0edae040cc8f-part14', 'scsi-SQEMU_QEMU_HARDDISK_baaa19a3-0465-4ace-be40-0edae040cc8f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_baaa19a3-0465-4ace-be40-0edae040cc8f-part15', 'scsi-SQEMU_QEMU_HARDDISK_baaa19a3-0465-4ace-be40-0edae040cc8f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 
'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_baaa19a3-0465-4ace-be40-0edae040cc8f-part1', 'scsi-SQEMU_QEMU_HARDDISK_baaa19a3-0465-4ace-be40-0edae040cc8f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.960363 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.960378 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  
2026-02-03 05:56:51.960390 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.960407 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-pTdMvd-sDAF-IMSx-8jpl-0O46-FJH5-Fa8Xca', 'dm-uuid-CRYPT-LUKS2-828a04c154134531b57bb1d5e612c63b-pTdMvd-sDAF-IMSx-8jpl-0O46-FJH5-Fa8Xca'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.960429 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:56:51.960442 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.960456 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9cbb71d1--90c1--5063--b304--f845b9e79bfb-osd--block--9cbb71d1--90c1--5063--b304--f845b9e79bfb', 'dm-uuid-LVM-mOPc0Zn7dvz2LW84SWB0gFMNdSnKuErspTdMvdsDAFIMSx8jpl0O46FJH5Fa8Xca'], 'uuids': ['828a04c1-5413-4531-b57b-b1d5e612c63b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2e14d93', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['pTdMvd-sDAF-IMSx-8jpl-0O46-FJH5-Fa8Xca']}}, 'ansible_loop_var': 'item'})  2026-02-03 05:56:51.960477 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-XC0deN-vGzU-6Pu8-7l0p-bm5X-RdCc-NCjXuW', 'scsi-0QEMU_QEMU_HARDDISK_0bcbc917-1e4e-4947-8603-c7f49bd04ea8', 'scsi-SQEMU_QEMU_HARDDISK_0bcbc917-1e4e-4947-8603-c7f49bd04ea8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0bcbc917', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--77c51d77--cdc1--5563--af81--33d9bc4e9bd8-osd--block--77c51d77--cdc1--5563--af81--33d9bc4e9bd8']}}, 'ansible_loop_var': 'item'})  2026-02-03 05:57:12.330404 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:57:12.330558 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1e34e583', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part16', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part14', 
'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part15', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part1', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:57:12.331357 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:57:12.331401 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 05:57:12.331415 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-wfUHEK-JIYa-xWx1-lem6-PI2c-myC5-FHw26a', 'dm-uuid-CRYPT-LUKS2-de4b76bf9af240aea6b34edbecd71396-wfUHEK-JIYa-xWx1-lem6-PI2c-myC5-FHw26a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-03 05:57:12.331436 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:57:12.331450 | orchestrator |
2026-02-03 05:57:12.331470 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-03 05:57:12.331483 | orchestrator | Tuesday 03 February 2026 05:56:53 +0000 (0:00:03.300) 0:02:06.535 ******
2026-02-03 05:57:12.331494 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:57:12.331505 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:57:12.331516 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:57:12.331527 | orchestrator | ok: [testbed-node-3]
2026-02-03 05:57:12.331538 | orchestrator | ok: [testbed-node-4]
2026-02-03 05:57:12.331548 | orchestrator | ok: [testbed-node-5]
2026-02-03 05:57:12.331559 | orchestrator | ok: [testbed-manager]
2026-02-03 05:57:12.331570 | orchestrator |
2026-02-03 05:57:12.331581 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-03 05:57:12.331592 | orchestrator | Tuesday 03 February 2026 05:56:56 +0000 (0:00:03.111) 0:02:09.647 ******
2026-02-03 05:57:12.331603 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:57:12.331613 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:57:12.331624 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:57:12.331635 | orchestrator | ok: [testbed-node-3]
2026-02-03 05:57:12.331646 | orchestrator | ok: [testbed-node-4]
2026-02-03 05:57:12.331657 | orchestrator | ok: [testbed-node-5]
2026-02-03 05:57:12.331668 | orchestrator | ok: [testbed-manager]
2026-02-03 05:57:12.331679 | orchestrator |
2026-02-03 05:57:12.331690 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-03 05:57:12.331701 | orchestrator | Tuesday 03 February 2026 05:56:58 +0000 (0:00:01.994) 0:02:11.641 ******
2026-02-03 05:57:12.331712 | orchestrator | ok: [testbed-node-0]
2026-02-03 05:57:12.331723 | orchestrator | ok: [testbed-node-1]
2026-02-03 05:57:12.331733 | orchestrator | ok: [testbed-node-2]
2026-02-03 05:57:12.331744 | orchestrator | ok: [testbed-node-3]
2026-02-03 05:57:12.331755 | orchestrator | skipping: [testbed-manager]
2026-02-03 05:57:12.331766 | orchestrator | ok: [testbed-node-5]
2026-02-03 05:57:12.331776 | orchestrator | ok: [testbed-node-4]
2026-02-03 05:57:12.331787 | orchestrator |
2026-02-03 05:57:12.331798 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-03 05:57:12.331809 | orchestrator | Tuesday 03 February 2026 05:57:01 +0000 (0:00:03.300) 0:02:14.942 ******
2026-02-03 05:57:12.331820 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:57:12.331831 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:57:12.331842 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:57:12.331852 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:57:12.331863 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:57:12.331874 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:57:12.331884 | orchestrator | skipping: [testbed-manager]
2026-02-03 05:57:12.331895 | orchestrator |
2026-02-03 05:57:12.331906 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-03 05:57:12.331917 | orchestrator | Tuesday 03 February 2026 05:57:03 +0000 (0:00:01.985) 0:02:16.927 ******
2026-02-03 05:57:12.331928 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:57:12.331938 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:57:12.331949 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:57:12.331960 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:57:12.331970 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:57:12.331981 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:57:12.331992 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)]
2026-02-03 05:57:12.332003 | orchestrator |
2026-02-03 05:57:12.332014 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-03 05:57:12.332025 | orchestrator | Tuesday 03 February 2026 05:57:06 +0000 (0:00:03.007) 0:02:19.934 ******
2026-02-03 05:57:12.332035 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:57:12.332053 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:57:12.332064 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:57:12.332074 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:57:12.332085 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:57:12.332096 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:57:12.332107 | orchestrator | skipping: [testbed-manager]
2026-02-03 05:57:12.332117 | orchestrator |
2026-02-03 05:57:12.332128 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-03 05:57:12.332139 | orchestrator | Tuesday 03 February 2026 05:57:08 +0000 (0:00:01.987) 0:02:21.922 ******
2026-02-03 05:57:12.332150 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-03 05:57:12.332161 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-02-03 05:57:12.332236 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-03 05:57:12.332252 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-02-03 05:57:12.332263 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-03 05:57:12.332274 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-02-03 05:57:12.332285 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-03 05:57:12.332296 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-03 05:57:12.332306 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-02-03 05:57:12.332318 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-03 05:57:12.332337 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-03 05:57:51.118113 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-03 05:57:51.118241 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-03 05:57:51.118249 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-03 05:57:51.118255 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-02-03 05:57:51.118261 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-02-03 05:57:51.118266 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-03 05:57:51.118272 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-03 05:57:51.118277 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-03 05:57:51.118283 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-02-03 05:57:51.118288 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-03 05:57:51.118293 | orchestrator |
2026-02-03 05:57:51.118300 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-03 05:57:51.118306 | orchestrator | Tuesday 03 February 2026 05:57:12 +0000 (0:00:03.581) 0:02:25.503 ******
2026-02-03 05:57:51.118312 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-03 05:57:51.118331 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-03 05:57:51.118336 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-03 05:57:51.118341 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:57:51.118347 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-03 05:57:51.118352 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-03 05:57:51.118357 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-03 05:57:51.118362 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:57:51.118367 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-03 05:57:51.118372 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-03 05:57:51.118377 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-03 05:57:51.118382 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:57:51.118387 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-03 05:57:51.118392 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-03 05:57:51.118398 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-03 05:57:51.118403 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:57:51.118426 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-03 05:57:51.118431 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-03 05:57:51.118436 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-03 05:57:51.118441 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-03 05:57:51.118446 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-03 05:57:51.118451 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-03 05:57:51.118456 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:57:51.118461 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:57:51.118466 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-02-03 05:57:51.118471 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-02-03 05:57:51.118476 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-02-03 05:57:51.118481 | orchestrator | skipping: [testbed-manager]
2026-02-03 05:57:51.118487 | orchestrator |
2026-02-03 05:57:51.118492 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-03 05:57:51.118497 | orchestrator | Tuesday 03 February 2026 05:57:14 +0000 (0:00:02.391) 0:02:27.894 ******
2026-02-03 05:57:51.118502 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:57:51.118507 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:57:51.118512 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:57:51.118517 | orchestrator | skipping: [testbed-manager]
2026-02-03 05:57:51.118523 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-03 05:57:51.118528 | orchestrator |
2026-02-03 05:57:51.118533 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-03 05:57:51.118540 | orchestrator | Tuesday 03 February 2026 05:57:16 +0000 (0:00:02.192) 0:02:30.086 ******
2026-02-03 05:57:51.118545 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:57:51.118550 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:57:51.118555 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:57:51.118560 | orchestrator |
2026-02-03 05:57:51.118565 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-03 05:57:51.118570 | orchestrator | Tuesday 03 February 2026 05:57:18 +0000 (0:00:01.769) 0:02:31.856 ******
2026-02-03 05:57:51.118575 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:57:51.118580 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:57:51.118585 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:57:51.118590 | orchestrator |
2026-02-03 05:57:51.118595 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-03 05:57:51.118601 | orchestrator | Tuesday 03 February 2026 05:57:20 +0000 (0:00:01.501) 0:02:33.357 ******
2026-02-03 05:57:51.118607 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:57:51.118613 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:57:51.118619 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:57:51.118624 | orchestrator |
2026-02-03 05:57:51.118630 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-03 05:57:51.118636 | orchestrator | Tuesday 03 February 2026 05:57:21 +0000 (0:00:01.439) 0:02:34.796 ******
2026-02-03 05:57:51.118642 | orchestrator | ok: [testbed-node-3]
2026-02-03 05:57:51.118648 | orchestrator | ok: [testbed-node-4]
2026-02-03 05:57:51.118654 | orchestrator | ok: [testbed-node-5]
2026-02-03 05:57:51.118659 | orchestrator |
2026-02-03 05:57:51.118665 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-03 05:57:51.118683 | orchestrator | Tuesday 03 February 2026 05:57:23 +0000 (0:00:01.502) 0:02:36.299 ******
2026-02-03 05:57:51.118689 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-03 05:57:51.118695 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-03 05:57:51.118701 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-03 05:57:51.118711 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:57:51.118718 | orchestrator |
2026-02-03 05:57:51.118723 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-03 05:57:51.118729 | orchestrator | Tuesday 03 February 2026 05:57:24 +0000 (0:00:01.777) 0:02:38.076 ******
2026-02-03 05:57:51.118735 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-03 05:57:51.118741 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-03 05:57:51.118747 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-03 05:57:51.118753 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:57:51.118758 | orchestrator |
2026-02-03 05:57:51.118763 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-03 05:57:51.118768 | orchestrator | Tuesday 03 February 2026 05:57:26 +0000 (0:00:01.804) 0:02:39.880 ******
2026-02-03 05:57:51.118773 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-03 05:57:51.118781 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-03 05:57:51.118786 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-03 05:57:51.118791 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:57:51.118796 | orchestrator |
2026-02-03 05:57:51.118801 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-03 05:57:51.118806 | orchestrator | Tuesday 03 February 2026 05:57:28 +0000 (0:00:01.811) 0:02:41.692 ******
2026-02-03 05:57:51.118811 | orchestrator | ok: [testbed-node-3]
2026-02-03 05:57:51.118817 | orchestrator | ok: [testbed-node-4]
2026-02-03 05:57:51.118822 | orchestrator | ok: [testbed-node-5]
2026-02-03 05:57:51.118827 | orchestrator |
2026-02-03 05:57:51.118832 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-03 05:57:51.118837 | orchestrator | Tuesday 03 February 2026 05:57:29 +0000 (0:00:01.425) 0:02:43.117 ******
2026-02-03 05:57:51.118842 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-03 05:57:51.118847 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-03 05:57:51.118852 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-03 05:57:51.118857 | orchestrator |
2026-02-03 05:57:51.118862 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-03 05:57:51.118867 | orchestrator | Tuesday 03 February 2026 05:57:31 +0000 (0:00:01.768) 0:02:44.886 ******
2026-02-03 05:57:51.118872 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-03 05:57:51.118877 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-03 05:57:51.118883 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-03 05:57:51.118888 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-03 05:57:51.118893 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-03 05:57:51.118898 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-03 05:57:51.118903 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-03 05:57:51.118908 | orchestrator |
2026-02-03 05:57:51.118913 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-03 05:57:51.118918 | orchestrator | Tuesday 03 February 2026 05:57:33 +0000 (0:00:02.247) 0:02:47.134 ******
2026-02-03 05:57:51.118923 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-03 05:57:51.118928 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-03 05:57:51.118933 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-03 05:57:51.118938 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-03 05:57:51.118943 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-03 05:57:51.118948 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-03 05:57:51.118957 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-03 05:57:51.118962 | orchestrator |
2026-02-03 05:57:51.118967 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] **************************
2026-02-03 05:57:51.118972 | orchestrator | Tuesday 03 February 2026 05:57:37 +0000 (0:00:03.227) 0:02:50.361 ******
2026-02-03 05:57:51.118978 | orchestrator | changed: [testbed-node-3]
2026-02-03 05:57:51.118983 | orchestrator | changed: [testbed-node-4]
2026-02-03 05:57:51.118988 | orchestrator | changed: [testbed-node-5]
2026-02-03 05:57:51.118993 | orchestrator | changed: [testbed-manager]
2026-02-03 05:57:51.118998 | orchestrator | changed: [testbed-node-1]
2026-02-03 05:57:51.119003 | orchestrator | changed: [testbed-node-2]
2026-02-03 05:57:51.119008 | orchestrator | changed: [testbed-node-0]
2026-02-03 05:57:51.119013 | orchestrator |
2026-02-03 05:57:51.119018 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] ***********************
2026-02-03 05:57:51.119023 | orchestrator | Tuesday 03 February 2026 05:57:48 +0000 (0:00:11.622) 0:03:01.984 ******
2026-02-03 05:57:51.119028 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:57:51.119033 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:57:51.119038 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:57:51.119043 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:57:51.119048 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:57:51.119053 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:57:51.119058 | orchestrator | skipping: [testbed-manager]
2026-02-03 05:57:51.119063 | orchestrator |
2026-02-03 05:57:51.119068 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ********************************
2026-02-03 05:57:51.119076 | orchestrator | Tuesday 03 February 2026 05:57:51 +0000 (0:00:02.303) 0:03:04.287 ******
2026-02-03 05:58:31.399507 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:58:31.399653 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:58:31.399681 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:58:31.399703 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:58:31.399722 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:58:31.399743 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:58:31.399764 | orchestrator | skipping: [testbed-manager]
2026-02-03 05:58:31.399785 | orchestrator |
2026-02-03 05:58:31.399809 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ********************************
2026-02-03 05:58:31.399831 | orchestrator | Tuesday 03 February 2026 05:57:53 +0000 (0:00:02.047) 0:03:06.334 ******
2026-02-03 05:58:31.399851 | orchestrator | skipping: [testbed-manager]
2026-02-03 05:58:31.399871 | orchestrator | changed: [testbed-node-0]
2026-02-03 05:58:31.399893 | orchestrator | changed: [testbed-node-2]
2026-02-03 05:58:31.399914 | orchestrator | changed: [testbed-node-1]
2026-02-03 05:58:31.399927 | orchestrator | changed: [testbed-node-3]
2026-02-03 05:58:31.399938 | orchestrator | changed: [testbed-node-4]
2026-02-03 05:58:31.399958 | orchestrator | changed: [testbed-node-5]
2026-02-03 05:58:31.399978 | orchestrator |
2026-02-03 05:58:31.399996 | orchestrator | TASK [ceph-validate : Include check_system.yml] ********************************
2026-02-03 05:58:31.400036 | orchestrator | Tuesday 03 February 2026 05:57:56 +0000 (0:00:03.526) 0:03:09.861 ******
2026-02-03 05:58:31.400058 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-02-03 05:58:31.400080 | orchestrator |
2026-02-03 05:58:31.400100 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] ***************
2026-02-03 05:58:31.400118 | orchestrator | Tuesday 03 February 2026 05:57:59 +0000 (0:00:03.271) 0:03:13.133 ******
2026-02-03 05:58:31.400136 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:58:31.400154 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:58:31.400173 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:58:31.400221 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:58:31.400241 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:58:31.400291 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:58:31.400312 | orchestrator | skipping: [testbed-manager]
2026-02-03 05:58:31.400332 | orchestrator |
2026-02-03 05:58:31.400351 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ******************************
2026-02-03 05:58:31.400370 | orchestrator | Tuesday 03 February 2026 05:58:01 +0000 (0:00:01.991) 0:03:15.124 ******
2026-02-03 05:58:31.400390 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:58:31.400410 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:58:31.400430 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:58:31.400449 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:58:31.400467 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:58:31.400486 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:58:31.400505 | orchestrator | skipping: [testbed-manager]
2026-02-03 05:58:31.400524 | orchestrator |
2026-02-03 05:58:31.400543 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************
2026-02-03 05:58:31.400557 | orchestrator | Tuesday 03 February 2026 05:58:04 +0000 (0:00:02.344) 0:03:17.469 ******
2026-02-03 05:58:31.400568 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:58:31.400579 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:58:31.400590 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:58:31.400601 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:58:31.400615 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:58:31.400633 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:58:31.400652 | orchestrator | skipping: [testbed-manager]
2026-02-03 05:58:31.400671 | orchestrator |
2026-02-03 05:58:31.400684 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************
2026-02-03 05:58:31.400696 | orchestrator | Tuesday 03 February 2026 05:58:06 +0000 (0:00:02.168) 0:03:19.637 ******
2026-02-03 05:58:31.400707 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:58:31.400717 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:58:31.400728 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:58:31.400738 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:58:31.400749 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:58:31.400766 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:58:31.400785 | orchestrator | skipping: [testbed-manager]
2026-02-03 05:58:31.400803 | orchestrator |
2026-02-03 05:58:31.400821 | orchestrator | TASK [ceph-validate : Fail on unsupported CentOS release] **********************
2026-02-03 05:58:31.400837 | orchestrator | Tuesday 03 February 2026 05:58:08 +0000 (0:00:02.322) 0:03:21.959 ******
2026-02-03 05:58:31.400848 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:58:31.400859 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:58:31.400870 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:58:31.400881 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:58:31.400891 | orchestrator | skipping: [testbed-node-4]
2026-02-03 05:58:31.400902 | orchestrator | skipping: [testbed-node-5]
2026-02-03 05:58:31.400913 | orchestrator | skipping: [testbed-manager]
2026-02-03 05:58:31.400924 | orchestrator |
2026-02-03 05:58:31.400935 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] ***
2026-02-03 05:58:31.400946 | orchestrator | Tuesday 03 February 2026 05:58:10 +0000 (0:00:02.079) 0:03:24.039 ******
2026-02-03 05:58:31.400957 | orchestrator | skipping: [testbed-node-0]
2026-02-03 05:58:31.400968 | orchestrator | skipping: [testbed-node-1]
2026-02-03 05:58:31.400979 | orchestrator | skipping: [testbed-node-2]
2026-02-03 05:58:31.400989 | orchestrator | skipping: [testbed-node-3]
2026-02-03 05:58:31.401000 |
orchestrator | skipping: [testbed-node-4] 2026-02-03 05:58:31.401011 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:58:31.401022 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:58:31.401033 | orchestrator | 2026-02-03 05:58:31.401044 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] *** 2026-02-03 05:58:31.401055 | orchestrator | Tuesday 03 February 2026 05:58:13 +0000 (0:00:02.397) 0:03:26.437 ****** 2026-02-03 05:58:31.401066 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:58:31.401088 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:58:31.401099 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:58:31.401110 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:58:31.401121 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:58:31.401132 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:58:31.401143 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:58:31.401154 | orchestrator | 2026-02-03 05:58:31.401236 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] ************************** 2026-02-03 05:58:31.401258 | orchestrator | Tuesday 03 February 2026 05:58:15 +0000 (0:00:02.295) 0:03:28.732 ****** 2026-02-03 05:58:31.401277 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:58:31.401295 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:58:31.401314 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:58:31.401334 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:58:31.401352 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:58:31.401366 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:58:31.401384 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:58:31.401401 | orchestrator | 2026-02-03 05:58:31.401419 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] *** 2026-02-03 05:58:31.401438 | orchestrator | Tuesday 03 
February 2026 05:58:18 +0000 (0:00:02.473) 0:03:31.206 ****** 2026-02-03 05:58:31.401456 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:58:31.401475 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:58:31.401486 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:58:31.401497 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:58:31.401507 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:58:31.401518 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:58:31.401529 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:58:31.401547 | orchestrator | 2026-02-03 05:58:31.401574 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ******************************** 2026-02-03 05:58:31.401594 | orchestrator | Tuesday 03 February 2026 05:58:20 +0000 (0:00:02.217) 0:03:33.423 ****** 2026-02-03 05:58:31.401613 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:58:31.401633 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:58:31.401650 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:58:31.401667 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:58:31.401686 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:58:31.401705 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:58:31.401722 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:58:31.401741 | orchestrator | 2026-02-03 05:58:31.401759 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ****************** 2026-02-03 05:58:31.401778 | orchestrator | Tuesday 03 February 2026 05:58:22 +0000 (0:00:02.014) 0:03:35.438 ****** 2026-02-03 05:58:31.401796 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:58:31.401809 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:58:31.401820 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:58:31.401831 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:58:31.401841 | orchestrator | skipping: [testbed-node-4] 2026-02-03 
05:58:31.401852 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:58:31.401863 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:58:31.401874 | orchestrator | 2026-02-03 05:58:31.401884 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] ******************************* 2026-02-03 05:58:31.401897 | orchestrator | Tuesday 03 February 2026 05:58:24 +0000 (0:00:02.241) 0:03:37.679 ****** 2026-02-03 05:58:31.401916 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:58:31.401934 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:58:31.401952 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:58:31.401971 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:58:31.401988 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:58:31.402007 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:58:31.402083 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:58:31.402096 | orchestrator | 2026-02-03 05:58:31.402107 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] ********************* 2026-02-03 05:58:31.402130 | orchestrator | Tuesday 03 February 2026 05:58:26 +0000 (0:00:02.224) 0:03:39.903 ****** 2026-02-03 05:58:31.402141 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:58:31.402152 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:58:31.402163 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:58:31.402176 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'})  2026-02-03 05:58:31.402281 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'})  2026-02-03 05:58:31.402298 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:58:31.402310 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'})  2026-02-03 05:58:31.402321 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'})  2026-02-03 05:58:31.402332 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:58:31.402343 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'})  2026-02-03 05:58:31.402353 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'})  2026-02-03 05:58:31.402364 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:58:31.402375 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:58:31.402386 | orchestrator | 2026-02-03 05:58:31.402397 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] ************* 2026-02-03 05:58:31.402408 | orchestrator | Tuesday 03 February 2026 05:58:29 +0000 (0:00:02.347) 0:03:42.251 ****** 2026-02-03 05:58:31.402418 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:58:31.402429 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:58:31.402440 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:58:31.402450 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:58:31.402461 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:58:31.402472 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:58:31.402483 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:58:31.402494 | orchestrator | 2026-02-03 05:58:31.402505 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************ 2026-02-03 05:58:31.402528 | orchestrator | Tuesday 03 February 2026 05:58:31 +0000 (0:00:02.318) 0:03:44.570 ****** 2026-02-03 
05:58:58.580051 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:58:58.580167 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:58:58.580182 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:58:58.580251 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:58:58.580271 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:58:58.580290 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:58:58.580308 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:58:58.580327 | orchestrator | 2026-02-03 05:58:58.580346 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ****** 2026-02-03 05:58:58.580366 | orchestrator | Tuesday 03 February 2026 05:58:33 +0000 (0:00:02.381) 0:03:46.952 ****** 2026-02-03 05:58:58.580384 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:58:58.580402 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:58:58.580420 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:58:58.580437 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:58:58.580453 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:58:58.580469 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:58:58.580484 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:58:58.580503 | orchestrator | 2026-02-03 05:58:58.580520 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] *** 2026-02-03 05:58:58.580588 | orchestrator | Tuesday 03 February 2026 05:58:35 +0000 (0:00:02.048) 0:03:49.000 ****** 2026-02-03 05:58:58.580610 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:58:58.580628 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:58:58.580646 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:58:58.580664 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:58:58.580682 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:58:58.580700 | orchestrator | skipping: [testbed-node-5] 2026-02-03 
05:58:58.580717 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:58:58.580734 | orchestrator | 2026-02-03 05:58:58.580753 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ******************************** 2026-02-03 05:58:58.580772 | orchestrator | Tuesday 03 February 2026 05:58:38 +0000 (0:00:02.301) 0:03:51.301 ****** 2026-02-03 05:58:58.580791 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:58:58.580810 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:58:58.580827 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:58:58.580846 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:58:58.580866 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:58:58.580885 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:58:58.580903 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:58:58.580923 | orchestrator | 2026-02-03 05:58:58.580943 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] ************** 2026-02-03 05:58:58.580961 | orchestrator | Tuesday 03 February 2026 05:58:40 +0000 (0:00:02.393) 0:03:53.695 ****** 2026-02-03 05:58:58.580978 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:58:58.580994 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:58:58.581012 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:58:58.581030 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:58:58.581050 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:58:58.581069 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:58:58.581087 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:58:58.581105 | orchestrator | 2026-02-03 05:58:58.581122 | orchestrator | TASK [ceph-validate : Include check_devices.yml] ******************************* 2026-02-03 05:58:58.581138 | orchestrator | Tuesday 03 February 2026 05:58:42 +0000 (0:00:02.109) 0:03:55.805 ****** 2026-02-03 05:58:58.581152 | orchestrator | skipping: [testbed-node-0] 2026-02-03 
05:58:58.581168 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:58:58.581183 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:58:58.581232 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:58:58.581250 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-03 05:58:58.581265 | orchestrator | 2026-02-03 05:58:58.581284 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************ 2026-02-03 05:58:58.581299 | orchestrator | Tuesday 03 February 2026 05:58:45 +0000 (0:00:02.692) 0:03:58.497 ****** 2026-02-03 05:58:58.581317 | orchestrator | ok: [testbed-node-3] 2026-02-03 05:58:58.581336 | orchestrator | ok: [testbed-node-4] 2026-02-03 05:58:58.581355 | orchestrator | ok: [testbed-node-5] 2026-02-03 05:58:58.581373 | orchestrator | 2026-02-03 05:58:58.581390 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] ************************** 2026-02-03 05:58:58.581406 | orchestrator | Tuesday 03 February 2026 05:58:46 +0000 (0:00:01.466) 0:03:59.964 ****** 2026-02-03 05:58:58.581424 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'})  2026-02-03 05:58:58.581443 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'})  2026-02-03 05:58:58.581461 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:58:58.581478 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'})  2026-02-03 05:58:58.581494 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'})  2026-02-03 
05:58:58.581535 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:58:58.581553 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'})  2026-02-03 05:58:58.581570 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'})  2026-02-03 05:58:58.581587 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:58:58.581604 | orchestrator | 2026-02-03 05:58:58.581621 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] *********************** 2026-02-03 05:58:58.581669 | orchestrator | Tuesday 03 February 2026 05:58:48 +0000 (0:00:01.512) 0:04:01.477 ****** 2026-02-03 05:58:58.581692 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'}, 'ansible_loop_var': 'item'})  2026-02-03 05:58:58.581713 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'}, 'ansible_loop_var': 'item'})  2026-02-03 05:58:58.581733 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:58:58.581764 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'}, 'ansible_loop_var': 'item'})  2026-02-03 
05:58:58.581784 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'}, 'ansible_loop_var': 'item'})  2026-02-03 05:58:58.581803 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:58:58.581822 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'}, 'ansible_loop_var': 'item'})  2026-02-03 05:58:58.581839 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'}, 'ansible_loop_var': 'item'})  2026-02-03 05:58:58.581857 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:58:58.581875 | orchestrator | 2026-02-03 05:58:58.581893 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] *** 2026-02-03 05:58:58.581911 | orchestrator | Tuesday 03 February 2026 05:58:50 +0000 (0:00:01.992) 0:04:03.470 ****** 2026-02-03 05:58:58.581929 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:58:58.581945 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:58:58.581963 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:58:58.581980 | orchestrator | 2026-02-03 05:58:58.581998 | orchestrator | TASK [ceph-validate : Get devices information] ********************************* 2026-02-03 05:58:58.582087 | orchestrator | Tuesday 03 February 2026 05:58:51 +0000 (0:00:01.420) 
0:04:04.890 ****** 2026-02-03 05:58:58.582112 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:58:58.582131 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:58:58.582168 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:58:58.582188 | orchestrator | 2026-02-03 05:58:58.582252 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] ************** 2026-02-03 05:58:58.582271 | orchestrator | Tuesday 03 February 2026 05:58:53 +0000 (0:00:01.497) 0:04:06.387 ****** 2026-02-03 05:58:58.582289 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:58:58.582307 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:58:58.582325 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:58:58.582342 | orchestrator | 2026-02-03 05:58:58.582359 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] *************** 2026-02-03 05:58:58.582376 | orchestrator | Tuesday 03 February 2026 05:58:54 +0000 (0:00:01.500) 0:04:07.888 ****** 2026-02-03 05:58:58.582394 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:58:58.582411 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:58:58.582428 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:58:58.582446 | orchestrator | 2026-02-03 05:58:58.582464 | orchestrator | TASK [ceph-validate : Check data logical volume] ******************************* 2026-02-03 05:58:58.582483 | orchestrator | Tuesday 03 February 2026 05:58:56 +0000 (0:00:01.676) 0:04:09.565 ****** 2026-02-03 05:58:58.582502 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'}) 2026-02-03 05:58:58.582522 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'}) 2026-02-03 05:58:58.582545 | orchestrator | ok: [testbed-node-5] => (item={'data': 
'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'}) 2026-02-03 05:58:58.582563 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'}) 2026-02-03 05:58:58.582600 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'}) 2026-02-03 05:59:00.435913 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'}) 2026-02-03 05:59:00.436017 | orchestrator | 2026-02-03 05:59:00.436033 | orchestrator | TASK [ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] *** 2026-02-03 05:59:00.436047 | orchestrator | Tuesday 03 February 2026 05:58:58 +0000 (0:00:02.181) 0:04:11.747 ****** 2026-02-03 05:59:00.436082 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29/osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 955, 'dev': 6, 'nlink': 1, 'atime': 1770090292.8294127, 'mtime': 1770090292.8264127, 'ctime': 1770090292.8264127, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': 
'/dev/ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29/osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'}, 'ansible_loop_var': 'item'})  2026-02-03 05:59:00.436099 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd/osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 965, 'dev': 6, 'nlink': 1, 'atime': 1770090311.6725903, 'mtime': 1770090311.66859, 'ctime': 1770090311.66859, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd/osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'}, 'ansible_loop_var': 'item'})  2026-02-03 05:59:00.436139 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:59:00.436172 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': 
'/dev/ceph-121565c5-01e5-5794-959e-075d91e35362/osd-block-121565c5-01e5-5794-959e-075d91e35362', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 956, 'dev': 6, 'nlink': 1, 'atime': 1770090293.3965538, 'mtime': 1770090293.3895538, 'ctime': 1770090293.3895538, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-121565c5-01e5-5794-959e-075d91e35362/osd-block-121565c5-01e5-5794-959e-075d91e35362', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'}, 'ansible_loop_var': 'item'})  2026-02-03 05:59:00.436266 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-1a37b12a-042e-589b-8d7d-13944ef33291/osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 966, 'dev': 6, 'nlink': 1, 'atime': 1770090311.9397323, 'mtime': 1770090311.9327323, 'ctime': 1770090311.9327323, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': 
False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-1a37b12a-042e-589b-8d7d-13944ef33291/osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'}, 'ansible_loop_var': 'item'})  2026-02-03 05:59:00.436284 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:59:00.436297 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb/osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 949, 'dev': 6, 'nlink': 1, 'atime': 1770090295.8654869, 'mtime': 1770090295.8594868, 'ctime': 1770090295.8594868, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb/osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'}, 'ansible_loop_var': 'item'})  2026-02-03 
05:59:00.436328 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8/osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 959, 'dev': 6, 'nlink': 1, 'atime': 1770090316.6646764, 'mtime': 1770090316.6586764, 'ctime': 1770090316.6586764, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8/osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'}, 'ansible_loop_var': 'item'})  2026-02-03 05:59:11.792867 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:59:11.792950 | orchestrator | 2026-02-03 05:59:11.792957 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] *********************** 2026-02-03 05:59:11.792963 | orchestrator | Tuesday 03 February 2026 05:59:00 +0000 (0:00:01.863) 0:04:13.610 ****** 2026-02-03 05:59:11.792968 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'})  2026-02-03 05:59:11.792974 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'})  2026-02-03 05:59:11.792979 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:59:11.792996 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'})  2026-02-03 05:59:11.793000 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'})  2026-02-03 05:59:11.793006 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:59:11.793012 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'})  2026-02-03 05:59:11.793018 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'})  2026-02-03 05:59:11.793034 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:59:11.793038 | orchestrator | 2026-02-03 05:59:11.793043 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] *** 2026-02-03 05:59:11.793048 | orchestrator | Tuesday 03 February 2026 05:59:01 +0000 (0:00:01.528) 0:04:15.139 ****** 2026-02-03 05:59:11.793053 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'}, 'ansible_loop_var': 'item'})  2026-02-03 05:59:11.793059 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 
'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'}, 'ansible_loop_var': 'item'})  2026-02-03 05:59:11.793063 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:59:11.793067 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'}, 'ansible_loop_var': 'item'})  2026-02-03 05:59:11.793071 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'}, 'ansible_loop_var': 'item'})  2026-02-03 05:59:11.793074 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:59:11.793078 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'}, 'ansible_loop_var': 'item'})  2026-02-03 05:59:11.793082 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'}, 'ansible_loop_var': 'item'})  2026-02-03 05:59:11.793086 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:59:11.793090 | orchestrator | 2026-02-03 05:59:11.793094 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] ********************** 2026-02-03 05:59:11.793098 | orchestrator | 
Tuesday 03 February 2026 05:59:03 +0000 (0:00:01.706) 0:04:16.846 ****** 2026-02-03 05:59:11.793102 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'})  2026-02-03 05:59:11.793105 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'})  2026-02-03 05:59:11.793109 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:59:11.793123 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'})  2026-02-03 05:59:11.793127 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'})  2026-02-03 05:59:11.793131 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:59:11.793135 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'})  2026-02-03 05:59:11.793145 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'})  2026-02-03 05:59:11.793149 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:59:11.793153 | orchestrator | 2026-02-03 05:59:11.793156 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] *** 2026-02-03 05:59:11.793160 | orchestrator | Tuesday 03 February 2026 05:59:05 +0000 (0:00:01.746) 0:04:18.592 ****** 2026-02-03 05:59:11.793164 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': 
{'data': 'osd-block-85b6ff9c-bd3f-596f-9d81-0006b9d69e29', 'data_vg': 'ceph-85b6ff9c-bd3f-596f-9d81-0006b9d69e29'}, 'ansible_loop_var': 'item'})  2026-02-03 05:59:11.793168 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-bafb60f3-a5a9-526b-adce-8ea58a9a19cd', 'data_vg': 'ceph-bafb60f3-a5a9-526b-adce-8ea58a9a19cd'}, 'ansible_loop_var': 'item'})  2026-02-03 05:59:11.793172 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:59:11.793176 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-121565c5-01e5-5794-959e-075d91e35362', 'data_vg': 'ceph-121565c5-01e5-5794-959e-075d91e35362'}, 'ansible_loop_var': 'item'})  2026-02-03 05:59:11.793180 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-1a37b12a-042e-589b-8d7d-13944ef33291', 'data_vg': 'ceph-1a37b12a-042e-589b-8d7d-13944ef33291'}, 'ansible_loop_var': 'item'})  2026-02-03 05:59:11.793184 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:59:11.793188 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-9cbb71d1-90c1-5063-b304-f845b9e79bfb', 'data_vg': 'ceph-9cbb71d1-90c1-5063-b304-f845b9e79bfb'}, 'ansible_loop_var': 'item'})  2026-02-03 05:59:11.793209 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-77c51d77-cdc1-5563-af81-33d9bc4e9bd8', 
'data_vg': 'ceph-77c51d77-cdc1-5563-af81-33d9bc4e9bd8'}, 'ansible_loop_var': 'item'})  2026-02-03 05:59:11.793213 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:59:11.793217 | orchestrator | 2026-02-03 05:59:11.793221 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] ******************************* 2026-02-03 05:59:11.793225 | orchestrator | Tuesday 03 February 2026 05:59:06 +0000 (0:00:01.470) 0:04:20.063 ****** 2026-02-03 05:59:11.793229 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:59:11.793233 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:59:11.793237 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:59:11.793240 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:59:11.793244 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:59:11.793248 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:59:11.793252 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:59:11.793256 | orchestrator | 2026-02-03 05:59:11.793259 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] ***************************** 2026-02-03 05:59:11.793263 | orchestrator | Tuesday 03 February 2026 05:59:08 +0000 (0:00:01.963) 0:04:22.026 ****** 2026-02-03 05:59:11.793267 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:59:11.793271 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:59:11.793275 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:59:11.793278 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:59:11.793286 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-03 05:59:11.793290 | orchestrator | 2026-02-03 05:59:11.793294 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] ************** 2026-02-03 05:59:11.793298 | orchestrator | Tuesday 03 February 2026 05:59:11 +0000 (0:00:02.816) 0:04:24.843 ****** 2026-02-03 05:59:11.793305 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709363 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709380 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709422 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:59:23.709436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709447 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709458 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709469 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709480 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709491 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:59:23.709501 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  
2026-02-03 05:59:23.709513 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709524 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709546 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709557 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:59:23.709568 | orchestrator | 2026-02-03 05:59:23.709580 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ******************** 2026-02-03 05:59:23.709592 | orchestrator | Tuesday 03 February 2026 05:59:13 +0000 (0:00:01.561) 0:04:26.405 ****** 2026-02-03 05:59:23.709603 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709614 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709625 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709646 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709680 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:59:23.709695 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709707 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709720 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709732 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709744 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709757 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:59:23.709769 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709782 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709795 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709827 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709840 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709852 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:59:23.709865 | orchestrator | 2026-02-03 05:59:23.709878 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ******************** 
2026-02-03 05:59:23.709891 | orchestrator | Tuesday 03 February 2026 05:59:15 +0000 (0:00:01.883) 0:04:28.288 ****** 2026-02-03 05:59:23.709909 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709935 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709960 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709973 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:59:23.709985 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.709997 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.710010 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.710093 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.710105 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.710116 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:59:23.710127 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.710148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.710159 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.710170 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.710181 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 05:59:23.710192 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:59:23.710282 | orchestrator | 2026-02-03 05:59:23.710293 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] *********************************** 2026-02-03 05:59:23.710304 | orchestrator | Tuesday 03 February 2026 05:59:16 +0000 (0:00:01.601) 0:04:29.889 ****** 2026-02-03 05:59:23.710315 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:59:23.710326 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:59:23.710337 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:59:23.710348 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:59:23.710359 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:59:23.710369 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:59:23.710380 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:59:23.710391 | orchestrator | 2026-02-03 05:59:23.710402 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] ***************************** 2026-02-03 05:59:23.710412 | orchestrator | Tuesday 03 February 2026 05:59:18 +0000 (0:00:02.058) 0:04:31.948 ****** 2026-02-03 05:59:23.710423 | 
orchestrator | skipping: [testbed-node-0] 2026-02-03 05:59:23.710434 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:59:23.710445 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:59:23.710455 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:59:23.710466 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:59:23.710476 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:59:23.710487 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:59:23.710497 | orchestrator | 2026-02-03 05:59:23.710508 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ****************** 2026-02-03 05:59:23.710519 | orchestrator | Tuesday 03 February 2026 05:59:21 +0000 (0:00:02.396) 0:04:34.345 ****** 2026-02-03 05:59:23.710530 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:59:23.710540 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:59:23.710551 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:59:23.710561 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:59:23.710572 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:59:23.710582 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:59:23.710593 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:59:23.710604 | orchestrator | 2026-02-03 05:59:23.710615 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] 
*** 2026-02-03 05:59:23.710625 | orchestrator | Tuesday 03 February 2026 05:59:23 +0000 (0:00:02.290) 0:04:36.636 ****** 2026-02-03 05:59:23.710646 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:59:34.575924 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:59:34.576080 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:59:34.576099 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:59:34.576123 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:59:34.576135 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:59:34.576147 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:59:34.576158 | orchestrator | 2026-02-03 05:59:34.576171 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] *** 2026-02-03 05:59:34.576184 | orchestrator | Tuesday 03 February 2026 05:59:25 +0000 (0:00:02.141) 0:04:38.777 ****** 2026-02-03 05:59:34.576239 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:59:34.576279 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:59:34.576305 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:59:34.576316 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:59:34.576327 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:59:34.576338 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:59:34.576348 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:59:34.576359 | orchestrator | 2026-02-03 05:59:34.576370 | orchestrator | TASK [ceph-validate : Validate container registry credentials] ***************** 2026-02-03 05:59:34.576381 | orchestrator | Tuesday 03 February 2026 05:59:27 +0000 (0:00:02.192) 0:04:40.970 ****** 2026-02-03 05:59:34.576392 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:59:34.576402 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:59:34.576413 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:59:34.576423 | orchestrator | skipping: [testbed-node-3] 
2026-02-03 05:59:34.576435 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:59:34.576449 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:59:34.576461 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:59:34.576473 | orchestrator | 2026-02-03 05:59:34.576486 | orchestrator | TASK [ceph-validate : Validate container service and container package] ******** 2026-02-03 05:59:34.576499 | orchestrator | Tuesday 03 February 2026 05:59:29 +0000 (0:00:01.984) 0:04:42.954 ****** 2026-02-03 05:59:34.576512 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:59:34.576525 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:59:34.576541 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:59:34.576559 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:59:34.576588 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:59:34.576609 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:59:34.576627 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:59:34.576645 | orchestrator | 2026-02-03 05:59:34.576663 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] ********************** 2026-02-03 05:59:34.576682 | orchestrator | Tuesday 03 February 2026 05:59:32 +0000 (0:00:02.331) 0:04:45.285 ****** 2026-02-03 05:59:34.576701 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-03 05:59:34.576722 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-03 05:59:34.576743 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-03 05:59:34.576764 | orchestrator | skipping: 
[testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-03 05:59:34.576783 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-03 05:59:34.576806 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-03 05:59:34.576826 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:59:34.576844 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-03 05:59:34.576863 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-03 05:59:34.576882 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-03 05:59:34.576901 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-03 05:59:34.576937 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-03 05:59:34.576957 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  
2026-02-03 05:59:34.576975 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:59:34.577018 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-03 05:59:34.577040 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-03 05:59:34.577060 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-03 05:59:34.577089 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-03 05:59:34.577109 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-03 05:59:34.577127 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-03 05:59:34.577147 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:59:34.577166 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-03 05:59:34.577185 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-03 05:59:34.577230 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile 
rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-03 05:59:34.577250 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-03 05:59:34.577269 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-03 05:59:34.577288 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-03 05:59:34.577307 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-03 05:59:34.577325 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-03 05:59:34.577344 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-03 05:59:34.577364 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-03 05:59:34.577385 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:59:34.577396 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-03 05:59:34.577407 | orchestrator | skipping: 
[testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-03 05:59:34.577418 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-03 05:59:34.577428 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-03 05:59:34.577439 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-03 05:59:34.577453 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-03 05:59:34.577483 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-03 05:59:39.322129 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-03 05:59:39.322225 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-03 05:59:39.322245 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:59:39.322252 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 
'client.nova'})  2026-02-03 05:59:39.322256 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-03 05:59:39.322262 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-03 05:59:39.322266 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:59:39.322271 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-03 05:59:39.322275 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-03 05:59:39.322280 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:59:39.322285 | orchestrator | 2026-02-03 05:59:39.322290 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************ 2026-02-03 05:59:39.322295 | orchestrator | Tuesday 03 February 2026 05:59:34 +0000 (0:00:02.463) 0:04:47.749 ****** 2026-02-03 05:59:39.322300 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:59:39.322304 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:59:39.322309 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:59:39.322313 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:59:39.322317 | orchestrator | skipping: [testbed-node-4] 2026-02-03 05:59:39.322322 | orchestrator | skipping: [testbed-node-5] 2026-02-03 05:59:39.322326 | orchestrator | skipping: [testbed-manager] 2026-02-03 05:59:39.322347 | orchestrator | 2026-02-03 05:59:39.322352 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] **************************** 2026-02-03 
05:59:39.322356 | orchestrator | Tuesday 03 February 2026 05:59:36 +0000 (0:00:02.403) 0:04:50.153 ****** 2026-02-03 05:59:39.322361 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-03 05:59:39.322366 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-03 05:59:39.322370 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-03 05:59:39.322375 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-03 05:59:39.322380 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-03 05:59:39.322384 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-03 05:59:39.322389 | orchestrator | skipping: [testbed-node-0] 2026-02-03 05:59:39.322393 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-03 05:59:39.322397 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-03 05:59:39.322402 | orchestrator | skipping: [testbed-node-1] => 
(item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-03 05:59:39.322406 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-03 05:59:39.322421 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-03 05:59:39.322425 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-03 05:59:39.322430 | orchestrator | skipping: [testbed-node-1] 2026-02-03 05:59:39.322437 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-03 05:59:39.322442 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-03 05:59:39.322446 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-03 05:59:39.322450 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-03 05:59:39.322455 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-03 
05:59:39.322459 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-03 05:59:39.322469 | orchestrator | skipping: [testbed-node-2] 2026-02-03 05:59:39.322473 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-03 05:59:39.322478 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-03 05:59:39.322482 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-03 05:59:39.322486 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-03 05:59:39.322491 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-03 05:59:39.322495 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-03 05:59:39.322500 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-03 05:59:39.322505 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-03 
05:59:39.322509 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-03 05:59:39.322514 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-03 05:59:39.322518 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-03 05:59:39.322523 | orchestrator | skipping: [testbed-node-3] 2026-02-03 05:59:39.322527 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-03 05:59:39.322531 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-03 05:59:39.322536 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-03 05:59:39.322544 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-03 06:00:23.870872 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-03 06:00:23.871072 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 
'client.glance'})  2026-02-03 06:00:23.871108 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-03 06:00:23.871122 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-03 06:00:23.871160 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-03 06:00:23.871173 | orchestrator | skipping: [testbed-manager] 2026-02-03 06:00:23.871186 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-03 06:00:23.871198 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-03 06:00:23.871280 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-03 06:00:23.871294 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:00:23.871305 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-03 06:00:23.871316 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:00:23.871328 | orchestrator | 2026-02-03 06:00:23.871340 | orchestrator | TASK [ceph-validate : Validate clients keys caps] 
****************************** 2026-02-03 06:00:23.871352 | orchestrator | Tuesday 03 February 2026 05:59:39 +0000 (0:00:02.342) 0:04:52.496 ****** 2026-02-03 06:00:23.871363 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:00:23.871374 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:00:23.871385 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:00:23.871396 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:00:23.871409 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:00:23.871423 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:00:23.871435 | orchestrator | skipping: [testbed-manager] 2026-02-03 06:00:23.871448 | orchestrator | 2026-02-03 06:00:23.871461 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] **************************** 2026-02-03 06:00:23.871473 | orchestrator | Tuesday 03 February 2026 05:59:41 +0000 (0:00:02.420) 0:04:54.916 ****** 2026-02-03 06:00:23.871487 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:00:23.871500 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:00:23.871513 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:00:23.871526 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:00:23.871539 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:00:23.871552 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:00:23.871565 | orchestrator | skipping: [testbed-manager] 2026-02-03 06:00:23.871579 | orchestrator | 2026-02-03 06:00:23.871597 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] ***************************** 2026-02-03 06:00:23.871625 | orchestrator | Tuesday 03 February 2026 05:59:44 +0000 (0:00:02.299) 0:04:57.216 ****** 2026-02-03 06:00:23.871647 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:00:23.871666 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:00:23.871684 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:00:23.871701 | orchestrator | skipping: [testbed-node-3] 
2026-02-03 06:00:23.871720 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:00:23.871739 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:00:23.871759 | orchestrator | skipping: [testbed-manager] 2026-02-03 06:00:23.871778 | orchestrator | 2026-02-03 06:00:23.871798 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-02-03 06:00:23.871811 | orchestrator | Tuesday 03 February 2026 05:59:46 +0000 (0:00:02.733) 0:04:59.949 ****** 2026-02-03 06:00:23.871823 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-02-03 06:00:23.871849 | orchestrator | 2026-02-03 06:00:23.871861 | orchestrator | TASK [ceph-container-engine : Include specific variables] ********************** 2026-02-03 06:00:23.871872 | orchestrator | Tuesday 03 February 2026 05:59:49 +0000 (0:00:03.057) 0:05:03.007 ****** 2026-02-03 06:00:23.871883 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-03 06:00:23.871894 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-03 06:00:23.871905 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-03 06:00:23.871916 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-03 06:00:23.871947 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-03 06:00:23.871959 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-03 06:00:23.871971 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-03 06:00:23.871982 | orchestrator | 2026-02-03 06:00:23.872001 
| orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] **** 2026-02-03 06:00:23.872013 | orchestrator | Tuesday 03 February 2026 05:59:52 +0000 (0:00:02.184) 0:05:05.191 ****** 2026-02-03 06:00:23.872024 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:00:23.872035 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:00:23.872046 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:00:23.872056 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:00:23.872067 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:00:23.872078 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:00:23.872089 | orchestrator | skipping: [testbed-manager] 2026-02-03 06:00:23.872100 | orchestrator | 2026-02-03 06:00:23.872111 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] ********* 2026-02-03 06:00:23.872122 | orchestrator | Tuesday 03 February 2026 05:59:54 +0000 (0:00:02.551) 0:05:07.742 ****** 2026-02-03 06:00:23.872132 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:00:23.872143 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:00:23.872154 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:00:23.872165 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:00:23.872176 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:00:23.872187 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:00:23.872197 | orchestrator | skipping: [testbed-manager] 2026-02-03 06:00:23.872288 | orchestrator | 2026-02-03 06:00:23.872309 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] *************** 2026-02-03 06:00:23.872329 | orchestrator | Tuesday 03 February 2026 05:59:56 +0000 (0:00:02.348) 0:05:10.091 ****** 2026-02-03 06:00:23.872348 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:00:23.872368 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:00:23.872388 | orchestrator | ok: [testbed-node-2] 2026-02-03 
06:00:23.872406 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:00:23.872417 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:00:23.872428 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:00:23.872439 | orchestrator | ok: [testbed-manager] 2026-02-03 06:00:23.872450 | orchestrator | 2026-02-03 06:00:23.872461 | orchestrator | TASK [ceph-container-engine : Restart docker] ********************************** 2026-02-03 06:00:23.872472 | orchestrator | Tuesday 03 February 2026 05:59:59 +0000 (0:00:02.907) 0:05:12.999 ****** 2026-02-03 06:00:23.872483 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:00:23.872494 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:00:23.872504 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:00:23.872515 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:00:23.872526 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:00:23.872537 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:00:23.872548 | orchestrator | skipping: [testbed-manager] 2026-02-03 06:00:23.872568 | orchestrator | 2026-02-03 06:00:23.872579 | orchestrator | TASK [ceph-container-common : Container registry authentication] *************** 2026-02-03 06:00:23.872590 | orchestrator | Tuesday 03 February 2026 06:00:02 +0000 (0:00:02.699) 0:05:15.699 ****** 2026-02-03 06:00:23.872601 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:00:23.872612 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:00:23.872622 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:00:23.872633 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:00:23.872644 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:00:23.872654 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:00:23.872665 | orchestrator | skipping: [testbed-manager] 2026-02-03 06:00:23.872676 | orchestrator | 2026-02-03 06:00:23.872687 | orchestrator | TASK [Get the ceph release being deployed] ************************************* 2026-02-03 
06:00:23.872698 | orchestrator | Tuesday 03 February 2026 06:00:05 +0000 (0:00:02.861) 0:05:18.560 ****** 2026-02-03 06:00:23.872709 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:00:23.872720 | orchestrator | 2026-02-03 06:00:23.872731 | orchestrator | TASK [Check ceph release being deployed] *************************************** 2026-02-03 06:00:23.872741 | orchestrator | Tuesday 03 February 2026 06:00:08 +0000 (0:00:02.889) 0:05:21.449 ****** 2026-02-03 06:00:23.872752 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:00:23.872763 | orchestrator | 2026-02-03 06:00:23.872774 | orchestrator | PLAY [Ensure cluster config is applied] **************************************** 2026-02-03 06:00:23.872785 | orchestrator | 2026-02-03 06:00:23.872796 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-03 06:00:23.872807 | orchestrator | Tuesday 03 February 2026 06:00:10 +0000 (0:00:02.200) 0:05:23.650 ****** 2026-02-03 06:00:23.872818 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:00:23.872829 | orchestrator | 2026-02-03 06:00:23.872840 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-03 06:00:23.872851 | orchestrator | Tuesday 03 February 2026 06:00:11 +0000 (0:00:01.495) 0:05:25.146 ****** 2026-02-03 06:00:23.872868 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:00:23.872886 | orchestrator | 2026-02-03 06:00:23.872905 | orchestrator | TASK [Set cluster configs] ***************************************************** 2026-02-03 06:00:23.872924 | orchestrator | Tuesday 03 February 2026 06:00:13 +0000 (0:00:01.217) 0:05:26.364 ****** 2026-02-03 06:00:23.872945 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': 
'__omit_place_holder__66d9d5c0a411652b15952a056b02e5f2a47ac31f'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-03 06:00:23.872977 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66d9d5c0a411652b15952a056b02e5f2a47ac31f'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-03 06:00:52.600907 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66d9d5c0a411652b15952a056b02e5f2a47ac31f'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-03 06:00:52.601039 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66d9d5c0a411652b15952a056b02e5f2a47ac31f'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-03 06:00:52.601069 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66d9d5c0a411652b15952a056b02e5f2a47ac31f'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-03 06:00:52.601121 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66d9d5c0a411652b15952a056b02e5f2a47ac31f'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__66d9d5c0a411652b15952a056b02e5f2a47ac31f'}])  2026-02-03 06:00:52.601148 | orchestrator | 2026-02-03 06:00:52.601168 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-02-03 06:00:52.601182 | orchestrator | 2026-02-03 06:00:52.601193 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-02-03 06:00:52.601204 | orchestrator | Tuesday 03 February 2026 06:00:23 +0000 (0:00:10.678) 0:05:37.042 ****** 2026-02-03 06:00:52.601273 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:00:52.601286 | orchestrator | 2026-02-03 06:00:52.601302 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-02-03 06:00:52.601321 | orchestrator | Tuesday 03 February 2026 06:00:25 +0000 (0:00:01.639) 0:05:38.682 ****** 2026-02-03 06:00:52.601338 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:00:52.601357 | orchestrator | 2026-02-03 06:00:52.601375 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-02-03 06:00:52.601394 | orchestrator | Tuesday 03 February 2026 06:00:26 +0000 (0:00:01.190) 0:05:39.872 ****** 2026-02-03 06:00:52.601412 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:00:52.601433 | orchestrator | 2026-02-03 06:00:52.601451 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-02-03 06:00:52.601469 | orchestrator | Tuesday 03 February 2026 06:00:27 +0000 (0:00:01.175) 0:05:41.047 ****** 2026-02-03 06:00:52.601487 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:00:52.601505 | orchestrator | 2026-02-03 06:00:52.601523 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-03 
06:00:52.601543 | orchestrator | Tuesday 03 February 2026 06:00:29 +0000 (0:00:01.208) 0:05:42.256 ****** 2026-02-03 06:00:52.601562 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-02-03 06:00:52.601580 | orchestrator | 2026-02-03 06:00:52.601599 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-03 06:00:52.601618 | orchestrator | Tuesday 03 February 2026 06:00:30 +0000 (0:00:01.206) 0:05:43.463 ****** 2026-02-03 06:00:52.601637 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:00:52.601655 | orchestrator | 2026-02-03 06:00:52.601674 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-03 06:00:52.601693 | orchestrator | Tuesday 03 February 2026 06:00:31 +0000 (0:00:01.505) 0:05:44.968 ****** 2026-02-03 06:00:52.601712 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:00:52.601731 | orchestrator | 2026-02-03 06:00:52.601750 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-03 06:00:52.601769 | orchestrator | Tuesday 03 February 2026 06:00:33 +0000 (0:00:01.223) 0:05:46.191 ****** 2026-02-03 06:00:52.601788 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:00:52.601807 | orchestrator | 2026-02-03 06:00:52.601825 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-03 06:00:52.601844 | orchestrator | Tuesday 03 February 2026 06:00:34 +0000 (0:00:01.548) 0:05:47.740 ****** 2026-02-03 06:00:52.601863 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:00:52.601882 | orchestrator | 2026-02-03 06:00:52.601901 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-03 06:00:52.601920 | orchestrator | Tuesday 03 February 2026 06:00:35 +0000 (0:00:01.299) 0:05:49.039 ****** 2026-02-03 06:00:52.601939 | orchestrator | ok: [testbed-node-0] 2026-02-03 
06:00:52.601958 | orchestrator | 2026-02-03 06:00:52.601991 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-03 06:00:52.602012 | orchestrator | Tuesday 03 February 2026 06:00:37 +0000 (0:00:01.322) 0:05:50.362 ****** 2026-02-03 06:00:52.602177 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:00:52.602197 | orchestrator | 2026-02-03 06:00:52.602242 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-03 06:00:52.602261 | orchestrator | Tuesday 03 February 2026 06:00:38 +0000 (0:00:01.238) 0:05:51.601 ****** 2026-02-03 06:00:52.602279 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:00:52.602297 | orchestrator | 2026-02-03 06:00:52.602345 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-03 06:00:52.602365 | orchestrator | Tuesday 03 February 2026 06:00:39 +0000 (0:00:01.179) 0:05:52.780 ****** 2026-02-03 06:00:52.602383 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:00:52.602401 | orchestrator | 2026-02-03 06:00:52.602419 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-03 06:00:52.602450 | orchestrator | Tuesday 03 February 2026 06:00:40 +0000 (0:00:01.239) 0:05:54.020 ****** 2026-02-03 06:00:52.602470 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-03 06:00:52.602489 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:00:52.602507 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:00:52.602525 | orchestrator | 2026-02-03 06:00:52.602542 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-03 06:00:52.602559 | orchestrator | Tuesday 03 February 2026 06:00:42 +0000 (0:00:01.890) 0:05:55.910 ****** 2026-02-03 06:00:52.602575 | 
orchestrator | ok: [testbed-node-0] 2026-02-03 06:00:52.602594 | orchestrator | 2026-02-03 06:00:52.602612 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-03 06:00:52.602630 | orchestrator | Tuesday 03 February 2026 06:00:44 +0000 (0:00:01.375) 0:05:57.286 ****** 2026-02-03 06:00:52.602649 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-03 06:00:52.602669 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:00:52.602688 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:00:52.602706 | orchestrator | 2026-02-03 06:00:52.602724 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-03 06:00:52.602742 | orchestrator | Tuesday 03 February 2026 06:00:47 +0000 (0:00:03.392) 0:06:00.678 ****** 2026-02-03 06:00:52.602760 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-03 06:00:52.602778 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-03 06:00:52.602796 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-03 06:00:52.602814 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:00:52.602832 | orchestrator | 2026-02-03 06:00:52.602852 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-03 06:00:52.602870 | orchestrator | Tuesday 03 February 2026 06:00:49 +0000 (0:00:01.535) 0:06:02.213 ****** 2026-02-03 06:00:52.602883 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-03 06:00:52.602896 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-03 06:00:52.602907 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-03 06:00:52.602918 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:00:52.602940 | orchestrator | 2026-02-03 06:00:52.602951 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-03 06:00:52.602962 | orchestrator | Tuesday 03 February 2026 06:00:51 +0000 (0:00:02.189) 0:06:04.403 ****** 2026-02-03 06:00:52.602975 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 06:00:52.602989 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 06:00:52.603001 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 06:00:52.603012 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:00:52.603023 | orchestrator | 2026-02-03 06:00:52.603034 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-03 06:00:52.603056 | orchestrator | Tuesday 03 February 2026 06:00:52 +0000 (0:00:01.369) 0:06:05.772 ****** 2026-02-03 06:01:14.645048 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'f906be70bf4b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-03 06:00:44.650134', 'end': '2026-02-03 06:00:44.704034', 'delta': '0:00:00.053900', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f906be70bf4b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-03 06:01:14.645135 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '9e707d2df2a9', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-03 06:00:45.267271', 'end': '2026-02-03 06:00:45.311325', 'delta': '0:00:00.044054', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9e707d2df2a9'], 'stderr_lines': [], 'failed': 
False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-03 06:01:14.645143 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '7edf8d69a692', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-03 06:00:46.223253', 'end': '2026-02-03 06:00:46.284088', 'delta': '0:00:00.060835', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7edf8d69a692'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-03 06:01:14.645165 | orchestrator | 2026-02-03 06:01:14.645172 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-03 06:01:14.645178 | orchestrator | Tuesday 03 February 2026 06:00:53 +0000 (0:00:01.226) 0:06:06.999 ****** 2026-02-03 06:01:14.645183 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:01:14.645190 | orchestrator | 2026-02-03 06:01:14.645195 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-03 06:01:14.645200 | orchestrator | Tuesday 03 February 2026 06:00:55 +0000 (0:00:02.038) 0:06:09.038 ****** 2026-02-03 06:01:14.645205 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:01:14.645247 | orchestrator | 2026-02-03 06:01:14.645257 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-03 06:01:14.645265 | orchestrator | Tuesday 03 February 2026 06:00:57 +0000 (0:00:01.415) 0:06:10.453 ****** 2026-02-03 06:01:14.645273 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:01:14.645281 | orchestrator | 2026-02-03 
06:01:14.645289 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-03 06:01:14.645297 | orchestrator | Tuesday 03 February 2026 06:00:58 +0000 (0:00:01.193) 0:06:11.647 ****** 2026-02-03 06:01:14.645305 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-02-03 06:01:14.645313 | orchestrator | 2026-02-03 06:01:14.645322 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-03 06:01:14.645330 | orchestrator | Tuesday 03 February 2026 06:01:00 +0000 (0:00:02.469) 0:06:14.117 ****** 2026-02-03 06:01:14.645339 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:01:14.645348 | orchestrator | 2026-02-03 06:01:14.645357 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-03 06:01:14.645366 | orchestrator | Tuesday 03 February 2026 06:01:02 +0000 (0:00:01.249) 0:06:15.367 ****** 2026-02-03 06:01:14.645372 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:01:14.645377 | orchestrator | 2026-02-03 06:01:14.645382 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-03 06:01:14.645387 | orchestrator | Tuesday 03 February 2026 06:01:03 +0000 (0:00:01.202) 0:06:16.569 ****** 2026-02-03 06:01:14.645392 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:01:14.645397 | orchestrator | 2026-02-03 06:01:14.645401 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-03 06:01:14.645407 | orchestrator | Tuesday 03 February 2026 06:01:04 +0000 (0:00:01.411) 0:06:17.981 ****** 2026-02-03 06:01:14.645415 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:01:14.645422 | orchestrator | 2026-02-03 06:01:14.645429 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-03 06:01:14.645437 | orchestrator | Tuesday 03 February 2026 06:01:06 
+0000 (0:00:01.280) 0:06:19.262 ****** 2026-02-03 06:01:14.645445 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:01:14.645453 | orchestrator | 2026-02-03 06:01:14.645474 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-03 06:01:14.645480 | orchestrator | Tuesday 03 February 2026 06:01:07 +0000 (0:00:01.193) 0:06:20.456 ****** 2026-02-03 06:01:14.645485 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:01:14.645489 | orchestrator | 2026-02-03 06:01:14.645502 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-03 06:01:14.645510 | orchestrator | Tuesday 03 February 2026 06:01:08 +0000 (0:00:01.211) 0:06:21.668 ****** 2026-02-03 06:01:14.645517 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:01:14.645525 | orchestrator | 2026-02-03 06:01:14.645532 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-03 06:01:14.645538 | orchestrator | Tuesday 03 February 2026 06:01:09 +0000 (0:00:01.172) 0:06:22.840 ****** 2026-02-03 06:01:14.645545 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:01:14.645553 | orchestrator | 2026-02-03 06:01:14.645561 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-03 06:01:14.645577 | orchestrator | Tuesday 03 February 2026 06:01:10 +0000 (0:00:01.225) 0:06:24.065 ****** 2026-02-03 06:01:14.645585 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:01:14.645593 | orchestrator | 2026-02-03 06:01:14.645601 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-03 06:01:14.645608 | orchestrator | Tuesday 03 February 2026 06:01:12 +0000 (0:00:01.211) 0:06:25.277 ****** 2026-02-03 06:01:14.645614 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:01:14.645620 | orchestrator | 2026-02-03 06:01:14.645625 | orchestrator | TASK 
[ceph-facts : Collect existed devices] ************************************ 2026-02-03 06:01:14.645631 | orchestrator | Tuesday 03 February 2026 06:01:13 +0000 (0:00:01.233) 0:06:26.510 ****** 2026-02-03 06:01:14.645638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:01:14.645646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:01:14.645652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:01:14.645659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 
'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-03 06:01:14.645666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:01:14.645672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:01:14.645683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:01:16.057155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8b2ebf21', 'removable': '0', 
'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part16', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part14', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part15', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part1', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-03 06:01:16.057313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:01:16.057334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:01:16.057385 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:01:16.057400 | orchestrator | 2026-02-03 06:01:16.057412 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-03 06:01:16.057425 | orchestrator | Tuesday 03 February 2026 06:01:14 +0000 (0:00:01.307) 0:06:27.817 ****** 2026-02-03 06:01:16.057439 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:01:16.057479 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:01:16.057512 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:01:16.057525 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:01:16.057537 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:01:16.057549 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:01:16.057560 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:01:16.057589 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8b2ebf21', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part16', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part14', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part15', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part1', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:02:09.059966 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:02:09.060080 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:02:09.060099 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:02:09.060114 | orchestrator | 2026-02-03 06:02:09.060127 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-03 06:02:09.060140 | 
orchestrator | Tuesday 03 February 2026 06:01:16 +0000 (0:00:01.417) 0:06:29.234 ****** 2026-02-03 06:02:09.060152 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:02:09.060164 | orchestrator | 2026-02-03 06:02:09.060176 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-03 06:02:09.060305 | orchestrator | Tuesday 03 February 2026 06:01:17 +0000 (0:00:01.544) 0:06:30.779 ****** 2026-02-03 06:02:09.060320 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:02:09.060332 | orchestrator | 2026-02-03 06:02:09.060343 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-03 06:02:09.060354 | orchestrator | Tuesday 03 February 2026 06:01:18 +0000 (0:00:01.148) 0:06:31.927 ****** 2026-02-03 06:02:09.060365 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:02:09.060376 | orchestrator | 2026-02-03 06:02:09.060387 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-03 06:02:09.060399 | orchestrator | Tuesday 03 February 2026 06:01:20 +0000 (0:00:01.519) 0:06:33.446 ****** 2026-02-03 06:02:09.060409 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:02:09.060421 | orchestrator | 2026-02-03 06:02:09.060432 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-03 06:02:09.060443 | orchestrator | Tuesday 03 February 2026 06:01:21 +0000 (0:00:01.180) 0:06:34.627 ****** 2026-02-03 06:02:09.060454 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:02:09.060468 | orchestrator | 2026-02-03 06:02:09.060498 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-03 06:02:09.060512 | orchestrator | Tuesday 03 February 2026 06:01:22 +0000 (0:00:01.289) 0:06:35.917 ****** 2026-02-03 06:02:09.060528 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:02:09.060549 | orchestrator | 2026-02-03 06:02:09.060569 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-03 06:02:09.060590 | orchestrator | Tuesday 03 February 2026 06:01:23 +0000 (0:00:01.192) 0:06:37.109 ****** 2026-02-03 06:02:09.060611 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-03 06:02:09.060629 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-03 06:02:09.060642 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-03 06:02:09.060657 | orchestrator | 2026-02-03 06:02:09.060671 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-03 06:02:09.060686 | orchestrator | Tuesday 03 February 2026 06:01:25 +0000 (0:00:02.028) 0:06:39.138 ****** 2026-02-03 06:02:09.060700 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-03 06:02:09.060711 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-03 06:02:09.060722 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-03 06:02:09.060733 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:02:09.060744 | orchestrator | 2026-02-03 06:02:09.060755 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-03 06:02:09.060766 | orchestrator | Tuesday 03 February 2026 06:01:27 +0000 (0:00:01.222) 0:06:40.361 ****** 2026-02-03 06:02:09.060777 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:02:09.060788 | orchestrator | 2026-02-03 06:02:09.060799 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-03 06:02:09.060810 | orchestrator | Tuesday 03 February 2026 06:01:28 +0000 (0:00:01.140) 0:06:41.502 ****** 2026-02-03 06:02:09.060820 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-03 06:02:09.060832 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 
06:02:09.060843 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:02:09.060854 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-03 06:02:09.060865 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-03 06:02:09.060876 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-03 06:02:09.060905 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-03 06:02:09.060917 | orchestrator | 2026-02-03 06:02:09.060928 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-03 06:02:09.060939 | orchestrator | Tuesday 03 February 2026 06:01:30 +0000 (0:00:02.300) 0:06:43.803 ****** 2026-02-03 06:02:09.060960 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-03 06:02:09.060972 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:02:09.060983 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:02:09.060994 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-03 06:02:09.061005 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-03 06:02:09.061015 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-03 06:02:09.061026 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-03 06:02:09.061037 | orchestrator | 2026-02-03 06:02:09.061048 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-02-03 06:02:09.061058 | orchestrator | Tuesday 03 February 2026 06:01:33 +0000 (0:00:03.167) 0:06:46.970 
******
2026-02-03 06:02:09.061069 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)]
2026-02-03 06:02:09.061080 | orchestrator |
2026-02-03 06:02:09.061091 | orchestrator | TASK [Display ceph health detail] **********************************************
2026-02-03 06:02:09.061101 | orchestrator | Tuesday 03 February 2026 06:01:36 +0000 (0:00:02.444) 0:06:49.414 ******
2026-02-03 06:02:09.061112 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:02:09.061123 | orchestrator |
2026-02-03 06:02:09.061134 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] ****************************
2026-02-03 06:02:09.061145 | orchestrator | Tuesday 03 February 2026 06:01:37 +0000 (0:00:01.300) 0:06:50.715 ******
2026-02-03 06:02:09.061156 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:02:09.061167 | orchestrator |
2026-02-03 06:02:09.061177 | orchestrator | TASK [Get the ceph quorum status] **********************************************
2026-02-03 06:02:09.061188 | orchestrator | Tuesday 03 February 2026 06:01:38 +0000 (0:00:01.194) 0:06:51.909 ******
2026-02-03 06:02:09.061199 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)]
2026-02-03 06:02:09.061210 | orchestrator |
2026-02-03 06:02:09.061249 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] *****************
2026-02-03 06:02:09.061261 | orchestrator | Tuesday 03 February 2026 06:01:41 +0000 (0:00:02.424) 0:06:54.333 ******
2026-02-03 06:02:09.061272 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:02:09.061283 | orchestrator |
2026-02-03 06:02:09.061293 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ********************
2026-02-03 06:02:09.061304 | orchestrator | Tuesday 03 February 2026 06:01:42 +0000 (0:00:01.266) 0:06:55.600 ******
2026-02-03 06:02:09.061315 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-03 06:02:09.061326 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-03 06:02:09.061337 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-03 06:02:09.061348 | orchestrator |
2026-02-03 06:02:09.061365 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-02-03 06:02:09.061376 | orchestrator | Tuesday 03 February 2026 06:01:45 +0000 (0:00:02.635) 0:06:58.236 ******
2026-02-03 06:02:09.061387 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-02-03 06:02:09.061397 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-02-03 06:02:09.061409 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-02-03 06:02:09.061420 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-02-03 06:02:09.061431 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-02-03 06:02:09.061442 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-02-03 06:02:09.061460 | orchestrator |
2026-02-03 06:02:09.061471 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-02-03 06:02:09.061482 | orchestrator | Tuesday 03 February 2026 06:01:58 +0000 (0:00:13.744) 0:07:11.980 ******
2026-02-03 06:02:09.061492 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2026-02-03 06:02:09.061503 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-03 06:02:09.061514 | orchestrator |
2026-02-03 06:02:09.061525 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-02-03 06:02:09.061535 | orchestrator | Tuesday 03 February 2026 06:02:02 +0000 (0:00:04.072) 0:07:16.053 ******
2026-02-03 06:02:09.061546 | orchestrator | changed: [testbed-node-0]
2026-02-03 06:02:09.061557 | orchestrator |
2026-02-03 06:02:09.061567 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-03 06:02:09.061578 | orchestrator | Tuesday 03 February 2026 06:02:05 +0000 (0:00:02.808) 0:07:18.862 ******
2026-02-03 06:02:09.061589 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0
2026-02-03 06:02:09.061600 | orchestrator |
2026-02-03 06:02:09.061614 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-03 06:02:09.061633 | orchestrator | Tuesday 03 February 2026 06:02:07 +0000 (0:00:01.625) 0:07:20.488 ******
2026-02-03 06:02:09.061651 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-02-03 06:02:09.061670 | orchestrator |
2026-02-03 06:02:09.061697 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-03 06:03:03.542406 | orchestrator | Tuesday 03 February 2026 06:02:09 +0000 (0:00:01.744) 0:07:22.232 ******
2026-02-03 06:03:03.542506 | orchestrator | ok: [testbed-node-0]
2026-02-03 06:03:03.542515 | orchestrator |
2026-02-03 06:03:03.542523 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-03 06:03:03.542529 | orchestrator | Tuesday 03 February 2026 06:02:10 +0000 (0:00:01.669) 0:07:23.902 ******
2026-02-03 06:03:03.542536 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:03.542543 | orchestrator |
2026-02-03 06:03:03.542550 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-03 06:03:03.542557 | orchestrator | Tuesday 03 February 2026 06:02:11 +0000 (0:00:01.204) 0:07:25.107 ******
2026-02-03 06:03:03.542563 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:03.542570 | orchestrator |
2026-02-03 06:03:03.542576 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-03 06:03:03.542583 | orchestrator | Tuesday 03 February 2026 06:02:13 +0000 (0:00:01.252) 0:07:26.360 ******
2026-02-03 06:03:03.542589 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:03.542595 | orchestrator |
2026-02-03 06:03:03.542601 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-03 06:03:03.542607 | orchestrator | Tuesday 03 February 2026 06:02:14 +0000 (0:00:01.344) 0:07:27.705 ******
2026-02-03 06:03:03.542614 | orchestrator | ok: [testbed-node-0]
2026-02-03 06:03:03.542620 | orchestrator |
2026-02-03 06:03:03.542626 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-03 06:03:03.542632 | orchestrator | Tuesday 03 February 2026 06:02:16 +0000 (0:00:01.816) 0:07:29.521 ******
2026-02-03 06:03:03.542639 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:03.542645 | orchestrator |
2026-02-03 06:03:03.542651 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-03 06:03:03.542658 | orchestrator | Tuesday 03 February 2026 06:02:17 +0000 (0:00:01.225) 0:07:30.747 ******
2026-02-03 06:03:03.542664 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:03.542671 | orchestrator |
2026-02-03 06:03:03.542676 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-03 06:03:03.542683 | orchestrator | Tuesday 03 February 2026 06:02:18 +0000 (0:00:01.263) 0:07:32.011 ******
2026-02-03 06:03:03.542689 | orchestrator | ok: [testbed-node-0]
2026-02-03 06:03:03.542694 | orchestrator |
2026-02-03 06:03:03.542701 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-03 06:03:03.542733 | orchestrator | Tuesday 03 February 2026 06:02:20 +0000 (0:00:01.664) 0:07:33.675 ******
2026-02-03 06:03:03.542740 | orchestrator | ok: [testbed-node-0]
2026-02-03 06:03:03.542747 | orchestrator |
2026-02-03 06:03:03.542753 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-03 06:03:03.542759 | orchestrator | Tuesday 03 February 2026 06:02:22 +0000 (0:00:01.634) 0:07:35.310 ******
2026-02-03 06:03:03.542766 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:03.542772 | orchestrator |
2026-02-03 06:03:03.542779 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-03 06:03:03.542785 | orchestrator | Tuesday 03 February 2026 06:02:23 +0000 (0:00:01.199) 0:07:36.510 ******
2026-02-03 06:03:03.542790 | orchestrator | ok: [testbed-node-0]
2026-02-03 06:03:03.542796 | orchestrator |
2026-02-03 06:03:03.542802 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-03 06:03:03.542809 | orchestrator | Tuesday 03 February 2026 06:02:24 +0000 (0:00:01.186) 0:07:37.697 ******
2026-02-03 06:03:03.542828 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:03.542834 | orchestrator |
2026-02-03 06:03:03.542839 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-03 06:03:03.542845 | orchestrator | Tuesday 03 February 2026 06:02:25 +0000 (0:00:01.289) 0:07:38.987 ******
2026-02-03 06:03:03.542851 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:03.542858 | orchestrator |
2026-02-03 06:03:03.542864 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-03 06:03:03.542870 | orchestrator | Tuesday 03 February 2026 06:02:26 +0000 (0:00:01.199) 0:07:40.186 ******
2026-02-03 06:03:03.542875 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:03.542881 | orchestrator |
2026-02-03 06:03:03.542886 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-03 06:03:03.542892 | orchestrator | Tuesday 03 February 2026 06:02:28 +0000 (0:00:01.231) 0:07:41.418 ******
2026-02-03 06:03:03.542897 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:03.542903 | orchestrator |
2026-02-03 06:03:03.542909 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-03 06:03:03.542916 | orchestrator | Tuesday 03 February 2026 06:02:29 +0000 (0:00:01.206) 0:07:42.625 ******
2026-02-03 06:03:03.542924 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:03.542930 | orchestrator |
2026-02-03 06:03:03.542936 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-03 06:03:03.542943 | orchestrator | Tuesday 03 February 2026 06:02:30 +0000 (0:00:01.196) 0:07:43.821 ******
2026-02-03 06:03:03.542950 | orchestrator | ok: [testbed-node-0]
2026-02-03 06:03:03.542956 | orchestrator |
2026-02-03 06:03:03.542962 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-03 06:03:03.542970 | orchestrator | Tuesday 03 February 2026 06:02:31 +0000 (0:00:01.280) 0:07:45.102 ******
2026-02-03 06:03:03.542976 | orchestrator | ok: [testbed-node-0]
2026-02-03 06:03:03.542983 | orchestrator |
2026-02-03 06:03:03.542989 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-03 06:03:03.542994 | orchestrator | Tuesday 03 February 2026 06:02:33 +0000 (0:00:01.228) 0:07:46.330 ******
2026-02-03 06:03:03.543002 | orchestrator | ok: [testbed-node-0]
2026-02-03 06:03:03.543009 | orchestrator |
2026-02-03 06:03:03.543015 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-03 06:03:03.543022 | orchestrator | Tuesday 03 February 2026 06:02:34 +0000 (0:00:01.308) 0:07:47.640 ******
2026-02-03 06:03:03.543028 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:03.543034 | orchestrator |
2026-02-03 06:03:03.543041 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-03 06:03:03.543046 | orchestrator | Tuesday 03 February 2026 06:02:35 +0000 (0:00:01.214) 0:07:48.854 ******
2026-02-03 06:03:03.543053 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:03.543059 | orchestrator |
2026-02-03 06:03:03.543085 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-03 06:03:03.543103 | orchestrator | Tuesday 03 February 2026 06:02:36 +0000 (0:00:01.135) 0:07:49.990 ******
2026-02-03 06:03:03.543109 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:03.543116 | orchestrator |
2026-02-03 06:03:03.543122 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-03 06:03:03.543128 | orchestrator | Tuesday 03 February 2026 06:02:37 +0000 (0:00:01.166) 0:07:51.156 ******
2026-02-03 06:03:03.543133 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:03.543140 | orchestrator |
2026-02-03 06:03:03.543146 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-03 06:03:03.543153 | orchestrator | Tuesday 03 February 2026 06:02:39 +0000 (0:00:01.182) 0:07:52.339 ******
2026-02-03 06:03:03.543158 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:03.543164 | orchestrator |
2026-02-03 06:03:03.543169 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-03 06:03:03.543174 | orchestrator | Tuesday 03 February 2026 06:02:40 +0000 (0:00:01.175) 0:07:53.514 ******
2026-02-03 06:03:03.543180 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:03.543186 | orchestrator |
2026-02-03 06:03:03.543191 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-03 06:03:03.543197 | orchestrator | Tuesday 03 February 2026 06:02:41 +0000 (0:00:01.189) 0:07:54.704 ******
2026-02-03 06:03:03.543203 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:03.543208 | orchestrator |
2026-02-03 06:03:03.543215 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-03 06:03:03.543222 | orchestrator | Tuesday 03 February 2026 06:02:42 +0000 (0:00:01.173) 0:07:55.877 ******
2026-02-03 06:03:03.543251 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:03.543258 | orchestrator |
2026-02-03 06:03:03.543263 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-03 06:03:03.543269 | orchestrator | Tuesday 03 February 2026 06:02:43 +0000 (0:00:01.177) 0:07:57.055 ******
2026-02-03 06:03:03.543275 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:03.543281 | orchestrator |
2026-02-03 06:03:03.543286 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-03 06:03:03.543292 | orchestrator | Tuesday 03 February 2026 06:02:45 +0000 (0:00:01.243) 0:07:58.299 ******
2026-02-03 06:03:03.543297 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:03.543304 | orchestrator |
2026-02-03 06:03:03.543310 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-03 06:03:03.543315 | orchestrator | Tuesday 03 February 2026 06:02:46 +0000 (0:00:01.230) 0:07:59.529 ******
2026-02-03 06:03:03.543321 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:03.543326 | orchestrator |
2026-02-03 06:03:03.543332 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-03 06:03:03.543338 | orchestrator | Tuesday 03 February 2026 06:02:47 +0000 (0:00:01.163) 0:08:00.692 ******
2026-02-03 06:03:03.543344 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:03.543349 | orchestrator |
2026-02-03 06:03:03.543355 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-03 06:03:03.543361 | orchestrator | Tuesday 03 February 2026 06:02:48 +0000 (0:00:01.159) 0:08:01.852 ******
2026-02-03 06:03:03.543367 | orchestrator | ok: [testbed-node-0]
2026-02-03 06:03:03.543372 | orchestrator |
2026-02-03 06:03:03.543385 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-03 06:03:03.543391 | orchestrator | Tuesday 03 February 2026 06:02:50 +0000 (0:00:02.177) 0:08:04.030 ******
2026-02-03 06:03:03.543397 | orchestrator | ok: [testbed-node-0]
2026-02-03 06:03:03.543403 | orchestrator |
2026-02-03 06:03:03.543409 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-03 06:03:03.543414 | orchestrator | Tuesday 03 February 2026 06:02:53 +0000 (0:00:02.563) 0:08:06.593 ******
2026-02-03 06:03:03.543421 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-02-03 06:03:03.543438 | orchestrator |
2026-02-03 06:03:03.543444 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-03 06:03:03.543449 | orchestrator | Tuesday 03 February 2026 06:02:54 +0000 (0:00:01.577) 0:08:08.171 ******
2026-02-03 06:03:03.543455 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:03.543465 | orchestrator |
2026-02-03 06:03:03.543470 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-03 06:03:03.543477 | orchestrator | Tuesday 03 February 2026 06:02:56 +0000 (0:00:01.289) 0:08:09.460 ******
2026-02-03 06:03:03.543482 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:03.543487 | orchestrator |
2026-02-03 06:03:03.543493 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-03 06:03:03.543499 | orchestrator | Tuesday 03 February 2026 06:02:57 +0000 (0:00:01.199) 0:08:10.659 ******
2026-02-03 06:03:03.543504 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-03 06:03:03.543510 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-03 06:03:03.543516 | orchestrator |
2026-02-03 06:03:03.543522 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-03 06:03:03.543528 | orchestrator | Tuesday 03 February 2026 06:02:59 +0000 (0:00:01.899) 0:08:12.559 ******
2026-02-03 06:03:03.543534 | orchestrator | ok: [testbed-node-0]
2026-02-03 06:03:03.543540 | orchestrator |
2026-02-03 06:03:03.543545 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-03 06:03:03.543552 | orchestrator | Tuesday 03 February 2026 06:03:01 +0000 (0:00:01.773) 0:08:14.332 ******
2026-02-03 06:03:03.543558 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:03.543564 | orchestrator |
2026-02-03 06:03:03.543570 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-03 06:03:03.543576 | orchestrator | Tuesday 03 February 2026 06:03:02 +0000 (0:00:01.182) 0:08:15.515 ******
2026-02-03 06:03:03.543582 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:03.543589 | orchestrator |
2026-02-03 06:03:03.543595 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-03 06:03:03.543611 | orchestrator | Tuesday 03 February 2026 06:03:03 +0000 (0:00:01.200) 0:08:16.716 ******
2026-02-03 06:03:54.939617 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:54.939722 | orchestrator |
2026-02-03 06:03:54.939737 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-03 06:03:54.939747 | orchestrator | Tuesday 03 February 2026 06:03:04 +0000 (0:00:01.225) 0:08:17.941 ******
2026-02-03 06:03:54.939756 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-02-03 06:03:54.939767 | orchestrator |
2026-02-03 06:03:54.939776 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-03 06:03:54.939785 | orchestrator | Tuesday 03 February 2026 06:03:06 +0000 (0:00:01.747) 0:08:19.689 ******
2026-02-03 06:03:54.939793 | orchestrator | ok: [testbed-node-0]
2026-02-03 06:03:54.939803 | orchestrator |
2026-02-03 06:03:54.939813 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-03 06:03:54.939822 | orchestrator | Tuesday 03 February 2026 06:03:08 +0000 (0:00:01.737) 0:08:21.427 ******
2026-02-03 06:03:54.939831 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-03 06:03:54.939840 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-03 06:03:54.939848 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-03 06:03:54.939857 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:54.939866 | orchestrator |
2026-02-03 06:03:54.939875 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-03 06:03:54.939883 | orchestrator | Tuesday 03 February 2026 06:03:09 +0000 (0:00:01.228) 0:08:22.655 ******
2026-02-03 06:03:54.939892 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:54.939901 | orchestrator |
2026-02-03 06:03:54.939931 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-03 06:03:54.939940 | orchestrator | Tuesday 03 February 2026 06:03:10 +0000 (0:00:01.189) 0:08:23.845 ******
2026-02-03 06:03:54.939949 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:54.939958 | orchestrator |
2026-02-03 06:03:54.939967 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-03 06:03:54.939975 | orchestrator | Tuesday 03 February 2026 06:03:11 +0000 (0:00:01.320) 0:08:25.166 ******
2026-02-03 06:03:54.939984 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:54.939993 | orchestrator |
2026-02-03 06:03:54.940001 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-03 06:03:54.940011 | orchestrator | Tuesday 03 February 2026 06:03:13 +0000 (0:00:01.211) 0:08:26.378 ******
2026-02-03 06:03:54.940019 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:54.940028 | orchestrator |
2026-02-03 06:03:54.940037 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-03 06:03:54.940045 | orchestrator | Tuesday 03 February 2026 06:03:14 +0000 (0:00:01.402) 0:08:27.780 ******
2026-02-03 06:03:54.940054 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:54.940063 | orchestrator |
2026-02-03 06:03:54.940071 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-03 06:03:54.940080 | orchestrator | Tuesday 03 February 2026 06:03:16 +0000 (0:00:01.610) 0:08:29.391 ******
2026-02-03 06:03:54.940088 | orchestrator | ok: [testbed-node-0]
2026-02-03 06:03:54.940097 | orchestrator |
2026-02-03 06:03:54.940106 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-03 06:03:54.940115 | orchestrator | Tuesday 03 February 2026 06:03:19 +0000 (0:00:02.850) 0:08:32.241 ******
2026-02-03 06:03:54.940124 | orchestrator | ok: [testbed-node-0]
2026-02-03 06:03:54.940132 | orchestrator |
2026-02-03 06:03:54.940141 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-03 06:03:54.940149 | orchestrator | Tuesday 03 February 2026 06:03:20 +0000 (0:00:01.170) 0:08:33.412 ******
2026-02-03 06:03:54.940160 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-02-03 06:03:54.940170 | orchestrator |
2026-02-03 06:03:54.940180 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-03 06:03:54.940191 | orchestrator | Tuesday 03 February 2026 06:03:21 +0000 (0:00:01.606) 0:08:35.018 ******
2026-02-03 06:03:54.940201 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:54.940211 | orchestrator |
2026-02-03 06:03:54.940221 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-03 06:03:54.940256 | orchestrator | Tuesday 03 February 2026 06:03:23 +0000 (0:00:01.222) 0:08:36.241 ******
2026-02-03 06:03:54.940267 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:54.940278 | orchestrator |
2026-02-03 06:03:54.940288 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-03 06:03:54.940299 | orchestrator | Tuesday 03 February 2026 06:03:24 +0000 (0:00:01.251) 0:08:37.493 ******
2026-02-03 06:03:54.940308 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:54.940319 | orchestrator |
2026-02-03 06:03:54.940329 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-03 06:03:54.940339 | orchestrator | Tuesday 03 February 2026 06:03:25 +0000 (0:00:01.263) 0:08:38.756 ******
2026-02-03 06:03:54.940348 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:54.940357 | orchestrator |
2026-02-03 06:03:54.940365 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-03 06:03:54.940374 | orchestrator | Tuesday 03 February 2026 06:03:26 +0000 (0:00:01.191) 0:08:39.947 ******
2026-02-03 06:03:54.940383 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:54.940391 | orchestrator |
2026-02-03 06:03:54.940400 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-03 06:03:54.940409 | orchestrator | Tuesday 03 February 2026 06:03:27 +0000 (0:00:01.227) 0:08:41.175 ******
2026-02-03 06:03:54.940417 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:54.940433 | orchestrator |
2026-02-03 06:03:54.940442 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-03 06:03:54.940451 | orchestrator | Tuesday 03 February 2026 06:03:29 +0000 (0:00:01.261) 0:08:42.436 ******
2026-02-03 06:03:54.940460 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:54.940468 | orchestrator |
2026-02-03 06:03:54.940493 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-03 06:03:54.940502 | orchestrator | Tuesday 03 February 2026 06:03:30 +0000 (0:00:01.242) 0:08:43.679 ******
2026-02-03 06:03:54.940511 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:54.940520 | orchestrator |
2026-02-03 06:03:54.940528 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-03 06:03:54.940538 | orchestrator | Tuesday 03 February 2026 06:03:31 +0000 (0:00:01.238) 0:08:44.918 ******
2026-02-03 06:03:54.940546 | orchestrator | ok: [testbed-node-0]
2026-02-03 06:03:54.940555 | orchestrator |
2026-02-03 06:03:54.940563 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-03 06:03:54.940611 | orchestrator | Tuesday 03 February 2026 06:03:32 +0000 (0:00:01.219) 0:08:46.137 ******
2026-02-03 06:03:54.940621 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-02-03 06:03:54.940630 | orchestrator |
2026-02-03 06:03:54.940645 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-03 06:03:54.940660 | orchestrator | Tuesday 03 February 2026 06:03:34 +0000 (0:00:01.564) 0:08:47.702 ******
2026-02-03 06:03:54.940669 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-02-03 06:03:54.940679 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-02-03 06:03:54.940687 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-02-03 06:03:54.940696 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-02-03 06:03:54.940704 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-02-03 06:03:54.940712 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-02-03 06:03:54.940721 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-02-03 06:03:54.940729 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-02-03 06:03:54.940738 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-03 06:03:54.940747 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-03 06:03:54.940755 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-03 06:03:54.940764 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-03 06:03:54.940772 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-03 06:03:54.940781 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-03 06:03:54.940789 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-02-03 06:03:54.940798 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-02-03 06:03:54.940807 | orchestrator |
2026-02-03 06:03:54.940815 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-03 06:03:54.940824 | orchestrator | Tuesday 03 February 2026 06:03:41 +0000 (0:00:07.122) 0:08:54.824 ******
2026-02-03 06:03:54.940832 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:54.940841 | orchestrator |
2026-02-03 06:03:54.940849 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-03 06:03:54.940857 | orchestrator | Tuesday 03 February 2026 06:03:42 +0000 (0:00:01.227) 0:08:56.052 ******
2026-02-03 06:03:54.940866 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:54.940875 | orchestrator |
2026-02-03 06:03:54.940888 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-03 06:03:54.940897 | orchestrator | Tuesday 03 February 2026 06:03:44 +0000 (0:00:01.231) 0:08:57.284 ******
2026-02-03 06:03:54.940905 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:54.940914 | orchestrator |
2026-02-03 06:03:54.940923 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-03 06:03:54.940937 | orchestrator | Tuesday 03 February 2026 06:03:45 +0000 (0:00:01.297) 0:08:58.582 ******
2026-02-03 06:03:54.940946 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:54.940954 | orchestrator |
2026-02-03 06:03:54.940963 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-03 06:03:54.940971 | orchestrator | Tuesday 03 February 2026 06:03:46 +0000 (0:00:01.191) 0:08:59.774 ******
2026-02-03 06:03:54.940980 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:54.940988 | orchestrator |
2026-02-03 06:03:54.940997 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-03 06:03:54.941005 | orchestrator | Tuesday 03 February 2026 06:03:47 +0000 (0:00:01.196) 0:09:00.970 ******
2026-02-03 06:03:54.941014 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:54.941022 | orchestrator |
2026-02-03 06:03:54.941031 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-03 06:03:54.941040 | orchestrator | Tuesday 03 February 2026 06:03:48 +0000 (0:00:01.137) 0:09:02.108 ******
2026-02-03 06:03:54.941048 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:54.941057 | orchestrator |
2026-02-03 06:03:54.941065 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-03 06:03:54.941074 | orchestrator | Tuesday 03 February 2026 06:03:50 +0000 (0:00:01.143) 0:09:03.251 ******
2026-02-03 06:03:54.941083 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:54.941091 | orchestrator |
2026-02-03 06:03:54.941099 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-03 06:03:54.941108 | orchestrator | Tuesday 03 February 2026 06:03:51 +0000 (0:00:01.254) 0:09:04.506 ******
2026-02-03 06:03:54.941117 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:54.941125 | orchestrator |
2026-02-03 06:03:54.941134 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-03 06:03:54.941142 | orchestrator | Tuesday 03 February 2026 06:03:52 +0000 (0:00:01.171) 0:09:05.677 ******
2026-02-03 06:03:54.941151 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:54.941159 | orchestrator |
2026-02-03 06:03:54.941168 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-03 06:03:54.941176 | orchestrator | Tuesday 03 February 2026 06:03:53 +0000 (0:00:01.187) 0:09:06.865 ******
2026-02-03 06:03:54.941185 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:03:54.941194 | orchestrator |
2026-02-03 06:03:54.941208 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-03 06:04:53.090439 | orchestrator | Tuesday 03 February 2026 06:03:54 +0000 (0:00:01.245) 0:09:08.111 ******
2026-02-03 06:04:53.090549 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:04:53.090565 | orchestrator |
2026-02-03 06:04:53.090578 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-03 06:04:53.090588 | orchestrator | Tuesday 03 February 2026 06:03:56 +0000 (0:00:01.225) 0:09:09.336 ******
2026-02-03 06:04:53.090598 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:04:53.090608 | orchestrator |
2026-02-03 06:04:53.090618 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-03 06:04:53.090628 | orchestrator | Tuesday 03 February 2026 06:03:57 +0000 (0:00:01.294) 0:09:10.631 ******
2026-02-03 06:04:53.090638 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:04:53.090647 | orchestrator |
2026-02-03 06:04:53.090657 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-03 06:04:53.090667 | orchestrator | Tuesday 03 February 2026 06:03:58 +0000 (0:00:01.196) 0:09:11.828 ******
2026-02-03 06:04:53.090677 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:04:53.090687 | orchestrator |
2026-02-03 06:04:53.090697 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-03 06:04:53.090707 | orchestrator | Tuesday 03 February 2026 06:03:59 +0000 (0:00:01.271) 0:09:13.099 ******
2026-02-03 06:04:53.090743 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:04:53.090753 | orchestrator |
2026-02-03 06:04:53.090763 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-03 06:04:53.090773 | orchestrator | Tuesday 03 February 2026 06:04:01 +0000 (0:00:01.245) 0:09:14.344 ******
2026-02-03 06:04:53.090782 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:04:53.090792 | orchestrator |
2026-02-03 06:04:53.090802 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-03 06:04:53.090812 | orchestrator | Tuesday 03 February 2026 06:04:02 +0000 (0:00:01.345) 0:09:15.690 ******
2026-02-03 06:04:53.090822 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:04:53.090832 | orchestrator |
2026-02-03 06:04:53.090841 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-03 06:04:53.090851 | orchestrator | Tuesday 03 February 2026 06:04:03 +0000 (0:00:01.165) 0:09:16.856 ******
2026-02-03 06:04:53.090860 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:04:53.090870 | orchestrator |
2026-02-03 06:04:53.090879 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-03 06:04:53.090889 | orchestrator | Tuesday 03 February 2026 06:04:04 +0000 (0:00:01.173) 0:09:18.029 ******
2026-02-03 06:04:53.090898 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:04:53.090908 | orchestrator |
2026-02-03 06:04:53.090917 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-03 06:04:53.090927 | orchestrator | Tuesday 03 February 2026 06:04:06 +0000 (0:00:01.237) 0:09:19.266 ******
2026-02-03 06:04:53.090936 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:04:53.090946 | orchestrator |
2026-02-03 06:04:53.090955 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-03 06:04:53.090979 | orchestrator | Tuesday 03 February 2026 06:04:07 +0000 (0:00:01.457) 0:09:20.725 ******
2026-02-03 06:04:53.090991 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-03 06:04:53.091003 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-03 06:04:53.091014 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-03 06:04:53.091025 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:04:53.091037 | orchestrator |
2026-02-03 06:04:53.091048 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-03 06:04:53.091060 | orchestrator | Tuesday 03 February 2026 06:04:09 +0000 (0:00:01.458) 0:09:22.569 ******
2026-02-03 06:04:53.091071 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-03 06:04:53.091083 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-03 06:04:53.091094 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-03 06:04:53.091106 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:04:53.091118 | orchestrator |
2026-02-03 06:04:53.091130 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-03 06:04:53.091141 | orchestrator | Tuesday 03 February 2026 06:04:10 +0000 (0:00:01.535) 0:09:24.027 ******
2026-02-03 06:04:53.091152 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-03 06:04:53.091163 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-03 06:04:53.091174 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-03 06:04:53.091186 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:04:53.091197 | orchestrator |
2026-02-03 06:04:53.091209 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-03 06:04:53.091221 | orchestrator | Tuesday 03 February 2026 06:04:12 +0000 (0:00:01.196) 0:09:25.563 ******
2026-02-03 06:04:53.091232 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:04:53.091269 | orchestrator |
2026-02-03 06:04:53.091286 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-03 06:04:53.091305 | orchestrator | Tuesday 03 February 2026 06:04:13 +0000 (0:00:01.196) 0:09:26.759 ******
2026-02-03 06:04:53.091323 | orchestrator |
skipping: [testbed-node-0] => (item=0)  2026-02-03 06:04:53.091352 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:04:53.091365 | orchestrator | 2026-02-03 06:04:53.091377 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-03 06:04:53.091387 | orchestrator | Tuesday 03 February 2026 06:04:15 +0000 (0:00:01.447) 0:09:28.206 ****** 2026-02-03 06:04:53.091396 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:04:53.091406 | orchestrator | 2026-02-03 06:04:53.091416 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-03 06:04:53.091426 | orchestrator | Tuesday 03 February 2026 06:04:16 +0000 (0:00:01.909) 0:09:30.116 ****** 2026-02-03 06:04:53.091435 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:04:53.091445 | orchestrator | 2026-02-03 06:04:53.091455 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-03 06:04:53.091481 | orchestrator | Tuesday 03 February 2026 06:04:18 +0000 (0:00:01.243) 0:09:31.359 ****** 2026-02-03 06:04:53.091491 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0 2026-02-03 06:04:53.091502 | orchestrator | 2026-02-03 06:04:53.091512 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-03 06:04:53.091521 | orchestrator | Tuesday 03 February 2026 06:04:19 +0000 (0:00:01.628) 0:09:32.988 ****** 2026-02-03 06:04:53.091531 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-02-03 06:04:53.091541 | orchestrator | 2026-02-03 06:04:53.091550 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-03 06:04:53.091560 | orchestrator | Tuesday 03 February 2026 06:04:23 +0000 (0:00:03.645) 0:09:36.634 ****** 2026-02-03 06:04:53.091569 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:04:53.091579 | 
orchestrator | 2026-02-03 06:04:53.091589 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-02-03 06:04:53.091599 | orchestrator | Tuesday 03 February 2026 06:04:24 +0000 (0:00:01.241) 0:09:37.875 ****** 2026-02-03 06:04:53.091608 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:04:53.091618 | orchestrator | 2026-02-03 06:04:53.091628 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-03 06:04:53.091637 | orchestrator | Tuesday 03 February 2026 06:04:25 +0000 (0:00:01.258) 0:09:39.134 ****** 2026-02-03 06:04:53.091647 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:04:53.091656 | orchestrator | 2026-02-03 06:04:53.091666 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-03 06:04:53.091676 | orchestrator | Tuesday 03 February 2026 06:04:27 +0000 (0:00:01.272) 0:09:40.406 ****** 2026-02-03 06:04:53.091686 | orchestrator | changed: [testbed-node-0] 2026-02-03 06:04:53.091695 | orchestrator | 2026-02-03 06:04:53.091705 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-03 06:04:53.091714 | orchestrator | Tuesday 03 February 2026 06:04:29 +0000 (0:00:02.090) 0:09:42.497 ****** 2026-02-03 06:04:53.091724 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:04:53.091734 | orchestrator | 2026-02-03 06:04:53.091743 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-02-03 06:04:53.091753 | orchestrator | Tuesday 03 February 2026 06:04:30 +0000 (0:00:01.637) 0:09:44.135 ****** 2026-02-03 06:04:53.091763 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:04:53.091772 | orchestrator | 2026-02-03 06:04:53.091782 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-03 06:04:53.091791 | orchestrator | Tuesday 03 February 2026 06:04:32 +0000 (0:00:01.549) 
0:09:45.685 ****** 2026-02-03 06:04:53.091801 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:04:53.091811 | orchestrator | 2026-02-03 06:04:53.091820 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-03 06:04:53.091830 | orchestrator | Tuesday 03 February 2026 06:04:34 +0000 (0:00:01.530) 0:09:47.216 ****** 2026-02-03 06:04:53.091840 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:04:53.091849 | orchestrator | 2026-02-03 06:04:53.091859 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-03 06:04:53.091875 | orchestrator | Tuesday 03 February 2026 06:04:35 +0000 (0:00:01.833) 0:09:49.050 ****** 2026-02-03 06:04:53.091890 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:04:53.091900 | orchestrator | 2026-02-03 06:04:53.091910 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-03 06:04:53.091919 | orchestrator | Tuesday 03 February 2026 06:04:37 +0000 (0:00:01.954) 0:09:51.004 ****** 2026-02-03 06:04:53.091929 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-03 06:04:53.091939 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-03 06:04:53.091948 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-03 06:04:53.091958 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-02-03 06:04:53.091968 | orchestrator | 2026-02-03 06:04:53.091977 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-03 06:04:53.091987 | orchestrator | Tuesday 03 February 2026 06:04:41 +0000 (0:00:04.031) 0:09:55.035 ****** 2026-02-03 06:04:53.091997 | orchestrator | changed: [testbed-node-0] 2026-02-03 06:04:53.092007 | orchestrator | 2026-02-03 06:04:53.092016 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-03 
06:04:53.092026 | orchestrator | Tuesday 03 February 2026 06:04:43 +0000 (0:00:02.126) 0:09:57.162 ****** 2026-02-03 06:04:53.092036 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:04:53.092045 | orchestrator | 2026-02-03 06:04:53.092055 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-03 06:04:53.092065 | orchestrator | Tuesday 03 February 2026 06:04:45 +0000 (0:00:01.238) 0:09:58.400 ****** 2026-02-03 06:04:53.092075 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:04:53.092084 | orchestrator | 2026-02-03 06:04:53.092094 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-03 06:04:53.092103 | orchestrator | Tuesday 03 February 2026 06:04:46 +0000 (0:00:01.227) 0:09:59.627 ****** 2026-02-03 06:04:53.092113 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:04:53.092123 | orchestrator | 2026-02-03 06:04:53.092132 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-03 06:04:53.092142 | orchestrator | Tuesday 03 February 2026 06:04:48 +0000 (0:00:02.303) 0:10:01.930 ****** 2026-02-03 06:04:53.092152 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:04:53.092161 | orchestrator | 2026-02-03 06:04:53.092171 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-03 06:04:53.092180 | orchestrator | Tuesday 03 February 2026 06:04:50 +0000 (0:00:01.542) 0:10:03.473 ****** 2026-02-03 06:04:53.092190 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:04:53.092200 | orchestrator | 2026-02-03 06:04:53.092210 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-03 06:04:53.092219 | orchestrator | Tuesday 03 February 2026 06:04:51 +0000 (0:00:01.215) 0:10:04.689 ****** 2026-02-03 06:04:53.092229 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0 2026-02-03 
06:04:53.092274 | orchestrator | 2026-02-03 06:04:53.092286 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-03 06:04:53.092303 | orchestrator | Tuesday 03 February 2026 06:04:53 +0000 (0:00:01.575) 0:10:06.264 ****** 2026-02-03 06:05:50.346666 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:05:50.346752 | orchestrator | 2026-02-03 06:05:50.346759 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-03 06:05:50.346766 | orchestrator | Tuesday 03 February 2026 06:04:54 +0000 (0:00:01.184) 0:10:07.449 ****** 2026-02-03 06:05:50.346771 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:05:50.346776 | orchestrator | 2026-02-03 06:05:50.346780 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-03 06:05:50.346785 | orchestrator | Tuesday 03 February 2026 06:04:55 +0000 (0:00:01.224) 0:10:08.673 ****** 2026-02-03 06:05:50.346790 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0 2026-02-03 06:05:50.346794 | orchestrator | 2026-02-03 06:05:50.346798 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-03 06:05:50.346821 | orchestrator | Tuesday 03 February 2026 06:04:57 +0000 (0:00:01.523) 0:10:10.196 ****** 2026-02-03 06:05:50.346826 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:05:50.346831 | orchestrator | 2026-02-03 06:05:50.346836 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-03 06:05:50.346840 | orchestrator | Tuesday 03 February 2026 06:04:59 +0000 (0:00:02.385) 0:10:12.582 ****** 2026-02-03 06:05:50.346844 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:05:50.346849 | orchestrator | 2026-02-03 06:05:50.346853 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-03 
06:05:50.346857 | orchestrator | Tuesday 03 February 2026 06:05:01 +0000 (0:00:01.991) 0:10:14.574 ****** 2026-02-03 06:05:50.346862 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:05:50.346866 | orchestrator | 2026-02-03 06:05:50.346871 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-03 06:05:50.346875 | orchestrator | Tuesday 03 February 2026 06:05:05 +0000 (0:00:03.672) 0:10:18.246 ****** 2026-02-03 06:05:50.346880 | orchestrator | changed: [testbed-node-0] 2026-02-03 06:05:50.346884 | orchestrator | 2026-02-03 06:05:50.346889 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-03 06:05:50.346893 | orchestrator | Tuesday 03 February 2026 06:05:08 +0000 (0:00:03.632) 0:10:21.879 ****** 2026-02-03 06:05:50.346898 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0 2026-02-03 06:05:50.346903 | orchestrator | 2026-02-03 06:05:50.346907 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-02-03 06:05:50.346911 | orchestrator | Tuesday 03 February 2026 06:05:10 +0000 (0:00:01.728) 0:10:23.607 ****** 2026-02-03 06:05:50.346916 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:05:50.346920 | orchestrator | 2026-02-03 06:05:50.346924 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-03 06:05:50.346929 | orchestrator | Tuesday 03 February 2026 06:05:12 +0000 (0:00:02.346) 0:10:25.954 ****** 2026-02-03 06:05:50.346933 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:05:50.346937 | orchestrator | 2026-02-03 06:05:50.346942 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-03 06:05:50.346946 | orchestrator | Tuesday 03 February 2026 06:05:16 +0000 (0:00:03.350) 0:10:29.305 ****** 2026-02-03 06:05:50.346963 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:05:50.346967 | orchestrator | 2026-02-03 06:05:50.346973 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-03 06:05:50.346979 | orchestrator | Tuesday 03 February 2026 06:05:17 +0000 (0:00:01.220) 0:10:30.525 ****** 2026-02-03 06:05:50.346987 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66d9d5c0a411652b15952a056b02e5f2a47ac31f'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-03 06:05:50.346997 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66d9d5c0a411652b15952a056b02e5f2a47ac31f'}}, {'key': 'cluster_network', 'value': 
'192.168.16.0/20'}]) 2026-02-03 06:05:50.347004 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66d9d5c0a411652b15952a056b02e5f2a47ac31f'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-03 06:05:50.347010 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66d9d5c0a411652b15952a056b02e5f2a47ac31f'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-03 06:05:50.347037 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66d9d5c0a411652b15952a056b02e5f2a47ac31f'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-03 06:05:50.347045 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66d9d5c0a411652b15952a056b02e5f2a47ac31f'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__66d9d5c0a411652b15952a056b02e5f2a47ac31f'}])  2026-02-03 06:05:50.347104 | orchestrator | 2026-02-03 06:05:50.347113 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-03 06:05:50.347119 | orchestrator | Tuesday 03 February 2026 06:05:27 +0000 (0:00:10.343) 0:10:40.869 ****** 
2026-02-03 06:05:50.347126 | orchestrator | changed: [testbed-node-0] 2026-02-03 06:05:50.347132 | orchestrator | 2026-02-03 06:05:50.347139 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-03 06:05:50.347145 | orchestrator | Tuesday 03 February 2026 06:05:30 +0000 (0:00:02.652) 0:10:43.522 ****** 2026-02-03 06:05:50.347151 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-03 06:05:50.347158 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-03 06:05:50.347164 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-03 06:05:50.347170 | orchestrator | 2026-02-03 06:05:50.347177 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-03 06:05:50.347183 | orchestrator | Tuesday 03 February 2026 06:05:32 +0000 (0:00:02.343) 0:10:45.866 ****** 2026-02-03 06:05:50.347189 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-03 06:05:50.347196 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-03 06:05:50.347202 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-03 06:05:50.347208 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:05:50.347216 | orchestrator | 2026-02-03 06:05:50.347223 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-03 06:05:50.347231 | orchestrator | Tuesday 03 February 2026 06:05:34 +0000 (0:00:01.498) 0:10:47.364 ****** 2026-02-03 06:05:50.347238 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:05:50.347272 | orchestrator | 2026-02-03 06:05:50.347280 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-02-03 06:05:50.347287 | orchestrator | Tuesday 03 February 2026 06:05:35 +0000 (0:00:01.241) 0:10:48.605 ****** 2026-02-03 06:05:50.347295 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:05:50.347302 | orchestrator | 2026-02-03 06:05:50.347310 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-02-03 06:05:50.347317 | orchestrator | 2026-02-03 06:05:50.347324 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-02-03 06:05:50.347332 | orchestrator | Tuesday 03 February 2026 06:05:37 +0000 (0:00:02.288) 0:10:50.894 ****** 2026-02-03 06:05:50.347344 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:05:50.347352 | orchestrator | 2026-02-03 06:05:50.347360 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-02-03 06:05:50.347368 | orchestrator | Tuesday 03 February 2026 06:05:38 +0000 (0:00:01.197) 0:10:52.092 ****** 2026-02-03 06:05:50.347375 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:05:50.347382 | orchestrator | 2026-02-03 06:05:50.347390 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-02-03 06:05:50.347403 | orchestrator | Tuesday 03 February 2026 06:05:39 +0000 (0:00:00.813) 0:10:52.905 ****** 2026-02-03 06:05:50.347411 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:05:50.347419 | orchestrator | 2026-02-03 06:05:50.347426 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-02-03 06:05:50.347434 | orchestrator | Tuesday 03 February 2026 06:05:40 +0000 (0:00:00.833) 0:10:53.739 ****** 2026-02-03 06:05:50.347441 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:05:50.347448 | orchestrator | 2026-02-03 06:05:50.347455 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-03 06:05:50.347463 | orchestrator | Tuesday 03 February 
2026 06:05:41 +0000 (0:00:00.788) 0:10:54.527 ****** 2026-02-03 06:05:50.347470 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-02-03 06:05:50.347478 | orchestrator | 2026-02-03 06:05:50.347485 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-03 06:05:50.347493 | orchestrator | Tuesday 03 February 2026 06:05:42 +0000 (0:00:01.136) 0:10:55.664 ****** 2026-02-03 06:05:50.347500 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:05:50.347507 | orchestrator | 2026-02-03 06:05:50.347515 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-03 06:05:50.347522 | orchestrator | Tuesday 03 February 2026 06:05:43 +0000 (0:00:01.524) 0:10:57.188 ****** 2026-02-03 06:05:50.347529 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:05:50.347537 | orchestrator | 2026-02-03 06:05:50.347544 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-03 06:05:50.347552 | orchestrator | Tuesday 03 February 2026 06:05:45 +0000 (0:00:01.199) 0:10:58.388 ****** 2026-02-03 06:05:50.347560 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:05:50.347567 | orchestrator | 2026-02-03 06:05:50.347575 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-03 06:05:50.347581 | orchestrator | Tuesday 03 February 2026 06:05:46 +0000 (0:00:01.561) 0:10:59.949 ****** 2026-02-03 06:05:50.347587 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:05:50.347594 | orchestrator | 2026-02-03 06:05:50.347600 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-03 06:05:50.347606 | orchestrator | Tuesday 03 February 2026 06:05:47 +0000 (0:00:01.191) 0:11:01.141 ****** 2026-02-03 06:05:50.347612 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:05:50.347619 | orchestrator | 2026-02-03 06:05:50.347625 | 
orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-03 06:05:50.347631 | orchestrator | Tuesday 03 February 2026 06:05:49 +0000 (0:00:01.163) 0:11:02.305 ****** 2026-02-03 06:05:50.347643 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:06:15.823431 | orchestrator | 2026-02-03 06:06:15.823549 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-03 06:06:15.823566 | orchestrator | Tuesday 03 February 2026 06:05:50 +0000 (0:00:01.214) 0:11:03.520 ****** 2026-02-03 06:06:15.823579 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:06:15.823591 | orchestrator | 2026-02-03 06:06:15.823603 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-03 06:06:15.823614 | orchestrator | Tuesday 03 February 2026 06:05:51 +0000 (0:00:01.287) 0:11:04.808 ****** 2026-02-03 06:06:15.823626 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:06:15.823638 | orchestrator | 2026-02-03 06:06:15.823649 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-03 06:06:15.823660 | orchestrator | Tuesday 03 February 2026 06:05:52 +0000 (0:00:01.234) 0:11:06.042 ****** 2026-02-03 06:06:15.823671 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:06:15.823683 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-03 06:06:15.823694 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:06:15.823705 | orchestrator | 2026-02-03 06:06:15.823716 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-03 06:06:15.823728 | orchestrator | Tuesday 03 February 2026 06:05:54 +0000 (0:00:01.828) 0:11:07.870 ****** 2026-02-03 06:06:15.823765 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:06:15.823777 | 
orchestrator | 2026-02-03 06:06:15.823789 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-03 06:06:15.823800 | orchestrator | Tuesday 03 February 2026 06:05:56 +0000 (0:00:01.403) 0:11:09.274 ****** 2026-02-03 06:06:15.823810 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:06:15.823821 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-03 06:06:15.823833 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:06:15.823844 | orchestrator | 2026-02-03 06:06:15.823854 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-03 06:06:15.823866 | orchestrator | Tuesday 03 February 2026 06:05:59 +0000 (0:00:03.052) 0:11:12.327 ****** 2026-02-03 06:06:15.823877 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-03 06:06:15.823891 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-03 06:06:15.823904 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-03 06:06:15.823916 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:06:15.823929 | orchestrator | 2026-02-03 06:06:15.823942 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-03 06:06:15.823956 | orchestrator | Tuesday 03 February 2026 06:06:00 +0000 (0:00:01.525) 0:11:13.852 ****** 2026-02-03 06:06:15.823985 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-03 06:06:15.824001 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-03 06:06:15.824015 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-03 06:06:15.824029 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:06:15.824042 | orchestrator | 2026-02-03 06:06:15.824055 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-03 06:06:15.824068 | orchestrator | Tuesday 03 February 2026 06:06:02 +0000 (0:00:01.642) 0:11:15.496 ****** 2026-02-03 06:06:15.824082 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 06:06:15.824100 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 06:06:15.824131 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 06:06:15.824146 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:06:15.824168 | orchestrator | 2026-02-03 06:06:15.824180 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-03 06:06:15.824190 | orchestrator | Tuesday 03 February 2026 06:06:03 +0000 (0:00:01.256) 0:11:16.752 ****** 2026-02-03 06:06:15.824204 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'fc9af7e241e8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-03 06:05:56.662538', 'end': '2026-02-03 06:05:56.707088', 'delta': '0:00:00.044550', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fc9af7e241e8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-03 06:06:15.824218 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '9e707d2df2a9', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-03 06:05:57.276124', 'end': '2026-02-03 06:05:57.327397', 'delta': '0:00:00.051273', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9e707d2df2a9'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-03 06:06:15.824234 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '7edf8d69a692', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-03 06:05:57.864317', 'end': '2026-02-03 06:05:57.922362', 'delta': '0:00:00.058045', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7edf8d69a692'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-03 06:06:15.824246 | orchestrator | 2026-02-03 06:06:15.824302 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-03 06:06:15.824313 | orchestrator | Tuesday 03 February 2026 06:06:04 +0000 (0:00:01.250) 0:11:18.002 ****** 2026-02-03 06:06:15.824324 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:06:15.824335 | orchestrator | 2026-02-03 06:06:15.824346 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-03 06:06:15.824357 | orchestrator | Tuesday 03 February 2026 06:06:06 +0000 (0:00:01.365) 0:11:19.368 ****** 2026-02-03 06:06:15.824368 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:06:15.824379 | orchestrator | 2026-02-03 06:06:15.824390 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-03 06:06:15.824401 | orchestrator | Tuesday 03 February 2026 06:06:07 +0000 (0:00:01.320) 0:11:20.689 ****** 2026-02-03 06:06:15.824412 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:06:15.824423 | orchestrator | 2026-02-03 06:06:15.824433 | orchestrator | TASK 
[ceph-facts : Get current fsid] ******************************************* 2026-02-03 06:06:15.824444 | orchestrator | Tuesday 03 February 2026 06:06:08 +0000 (0:00:01.185) 0:11:21.874 ****** 2026-02-03 06:06:15.824455 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-03 06:06:15.824466 | orchestrator | 2026-02-03 06:06:15.824477 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-03 06:06:15.824488 | orchestrator | Tuesday 03 February 2026 06:06:11 +0000 (0:00:03.078) 0:11:24.953 ****** 2026-02-03 06:06:15.824507 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:06:15.824518 | orchestrator | 2026-02-03 06:06:15.824529 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-03 06:06:15.824540 | orchestrator | Tuesday 03 February 2026 06:06:13 +0000 (0:00:01.293) 0:11:26.247 ****** 2026-02-03 06:06:15.824551 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:06:15.824562 | orchestrator | 2026-02-03 06:06:15.824576 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-03 06:06:15.824596 | orchestrator | Tuesday 03 February 2026 06:06:14 +0000 (0:00:01.320) 0:11:27.567 ****** 2026-02-03 06:06:15.824608 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:06:15.824618 | orchestrator | 2026-02-03 06:06:15.824629 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-03 06:06:15.824648 | orchestrator | Tuesday 03 February 2026 06:06:15 +0000 (0:00:01.421) 0:11:28.989 ****** 2026-02-03 06:06:26.814466 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:06:26.814549 | orchestrator | 2026-02-03 06:06:26.814556 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-03 06:06:26.814562 | orchestrator | Tuesday 03 February 2026 06:06:16 +0000 (0:00:01.153) 0:11:30.143 ****** 
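The `Set_fact running_mon - container` results above are derived from per-host `docker ps -q --filter name=ceph-mon-<hostname>` probes: a non-empty stdout means a matching mon container is running on that host. A minimal sketch of that selection logic — the helper name and "first non-empty wins" rule are illustrative assumptions, not ceph-ansible's exact code; the container IDs are the ones reported in the log:

```python
def pick_running_mon(results):
    """Return the first mon host whose `docker ps -q --filter name=...`
    probe printed a container ID (non-empty stdout => a match is running)."""
    for host, stdout in results:
        if stdout.strip():
            return host
    return None

# (host, probe stdout) pairs as reported in the log above.
probes = [
    ("testbed-node-0", "fc9af7e241e8\n"),
    ("testbed-node-1", "9e707d2df2a9\n"),
    ("testbed-node-2", "7edf8d69a692\n"),
]
print(pick_running_mon(probes))  # -> testbed-node-0
```

Note that `--filter name=` matches the container name as a substring, which is why the probe filters on the full `ceph-mon-<hostname>` prefix rather than just `ceph-mon`.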
2026-02-03 06:06:26.814566 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:06:26.814570 | orchestrator | 2026-02-03 06:06:26.814574 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-03 06:06:26.814578 | orchestrator | Tuesday 03 February 2026 06:06:18 +0000 (0:00:01.183) 0:11:31.327 ****** 2026-02-03 06:06:26.814582 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:06:26.814586 | orchestrator | 2026-02-03 06:06:26.814590 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-03 06:06:26.814594 | orchestrator | Tuesday 03 February 2026 06:06:19 +0000 (0:00:01.150) 0:11:32.477 ****** 2026-02-03 06:06:26.814598 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:06:26.814601 | orchestrator | 2026-02-03 06:06:26.814605 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-03 06:06:26.814609 | orchestrator | Tuesday 03 February 2026 06:06:20 +0000 (0:00:01.277) 0:11:33.754 ****** 2026-02-03 06:06:26.814613 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:06:26.814617 | orchestrator | 2026-02-03 06:06:26.814620 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-03 06:06:26.814624 | orchestrator | Tuesday 03 February 2026 06:06:21 +0000 (0:00:01.186) 0:11:34.941 ****** 2026-02-03 06:06:26.814629 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:06:26.814633 | orchestrator | 2026-02-03 06:06:26.814637 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-03 06:06:26.814641 | orchestrator | Tuesday 03 February 2026 06:06:22 +0000 (0:00:01.184) 0:11:36.125 ****** 2026-02-03 06:06:26.814645 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:06:26.814649 | orchestrator | 2026-02-03 06:06:26.814653 | orchestrator | TASK [ceph-facts : Collect existed devices] 
************************************ 2026-02-03 06:06:26.814656 | orchestrator | Tuesday 03 February 2026 06:06:24 +0000 (0:00:01.281) 0:11:37.407 ****** 2026-02-03 06:06:26.814662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:06:26.814680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:06:26.814696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:06:26.814702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-34-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 
'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-03 06:06:26.814708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:06:26.814723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:06:26.814728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:06:26.814737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '24352e15', 'removable': '0', 'support_discard': '4096', 'partitions': 
{'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part16', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part14', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part15', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part1', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-03 06:06:26.814747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:06:26.814751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:06:26.814755 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:06:26.814759 | orchestrator | 2026-02-03 06:06:26.814763 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-03 06:06:26.814767 | orchestrator | Tuesday 03 February 2026 06:06:25 +0000 (0:00:01.331) 0:11:38.739 ****** 2026-02-03 06:06:26.814775 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:06:32.298514 | 
orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:06:32.298620 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:06:32.298654 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-34-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:06:32.298690 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:06:32.298703 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:06:32.298715 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:06:32.298756 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '24352e15', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part16', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part14', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part15', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part1', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:06:32.298781 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:06:32.298793 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:06:32.298805 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:06:32.298818 | orchestrator | 2026-02-03 06:06:32.298832 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-03 06:06:32.298844 | 
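The long runs of skipped items above come from the device-discovery tasks iterating over every block device in `ansible_facts`: the virtual `loop*` devices, the removable `sr0` config-drive, and the already-partitioned root disk `sda` are all unsuitable as OSD candidates. A rough sketch of that kind of screening, using fact keys visible in the log (`removable`, `partitions`, `size`) — the exact conditions ceph-ansible applies for `osd_auto_discovery` may differ, and `sdb` below is a hypothetical blank disk:

```python
def discoverable(name, info):
    """Heuristic screen over ansible_facts device entries:
    reject pseudo/removable devices and anything already partitioned."""
    if name.startswith(("loop", "sr", "ram")):
        return False                       # virtual / optical / ramdisk
    if info.get("removable") == "1":
        return False                       # removable media
    if info.get("partitions"):
        return False                       # already in use (e.g. root disk)
    if info.get("size", "0.00 Bytes").endswith("Bytes"):
        return False                       # zero-sized device
    return True

# Trimmed-down entries modelled on the facts shown in the log.
devices = {
    "loop0": {"removable": "0", "partitions": {}, "size": "0.00 Bytes"},
    "sr0":   {"removable": "1", "partitions": {}, "size": "506.00 KB"},
    "sda":   {"removable": "0", "partitions": {"sda1": {}}, "size": "80.00 GB"},
    "sdb":   {"removable": "0", "partitions": {}, "size": "20.00 GB"},  # hypothetical
}
print([d for d, i in devices.items() if discoverable(d, i)])  # -> ['sdb']
```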
orchestrator | Tuesday 03 February 2026 06:06:26 +0000 (0:00:01.250) 0:11:39.990 ****** 2026-02-03 06:06:32.298857 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:06:32.298870 | orchestrator | 2026-02-03 06:06:32.298881 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-03 06:06:32.298893 | orchestrator | Tuesday 03 February 2026 06:06:28 +0000 (0:00:01.617) 0:11:41.608 ****** 2026-02-03 06:06:32.298903 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:06:32.298915 | orchestrator | 2026-02-03 06:06:32.298926 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-03 06:06:32.298937 | orchestrator | Tuesday 03 February 2026 06:06:29 +0000 (0:00:01.205) 0:11:42.813 ****** 2026-02-03 06:06:32.298949 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:06:32.298961 | orchestrator | 2026-02-03 06:06:32.298972 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-03 06:06:32.298990 | orchestrator | Tuesday 03 February 2026 06:06:32 +0000 (0:00:02.665) 0:11:45.478 ****** 2026-02-03 06:07:15.723780 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:07:15.723887 | orchestrator | 2026-02-03 06:07:15.723902 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-03 06:07:15.723914 | orchestrator | Tuesday 03 February 2026 06:06:33 +0000 (0:00:01.182) 0:11:46.660 ****** 2026-02-03 06:07:15.723923 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:07:15.723933 | orchestrator | 2026-02-03 06:07:15.723942 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-03 06:07:15.723951 | orchestrator | Tuesday 03 February 2026 06:06:34 +0000 (0:00:01.320) 0:11:47.981 ****** 2026-02-03 06:07:15.723960 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:07:15.723968 | orchestrator | 2026-02-03 06:07:15.723977 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-03 06:07:15.723986 | orchestrator | Tuesday 03 February 2026 06:06:36 +0000 (0:00:01.263) 0:11:49.245 ****** 2026-02-03 06:07:15.724016 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-03 06:07:15.724027 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-03 06:07:15.724035 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-03 06:07:15.724044 | orchestrator | 2026-02-03 06:07:15.724053 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-03 06:07:15.724062 | orchestrator | Tuesday 03 February 2026 06:06:37 +0000 (0:00:01.866) 0:11:51.111 ****** 2026-02-03 06:07:15.724070 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-03 06:07:15.724079 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-03 06:07:15.724088 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-03 06:07:15.724097 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:07:15.724106 | orchestrator | 2026-02-03 06:07:15.724115 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-03 06:07:15.724124 | orchestrator | Tuesday 03 February 2026 06:06:39 +0000 (0:00:01.307) 0:11:52.418 ****** 2026-02-03 06:07:15.724133 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:07:15.724141 | orchestrator | 2026-02-03 06:07:15.724150 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-03 06:07:15.724159 | orchestrator | Tuesday 03 February 2026 06:06:40 +0000 (0:00:01.167) 0:11:53.586 ****** 2026-02-03 06:07:15.724168 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:07:15.724177 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-03 
06:07:15.724186 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:07:15.724195 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-03 06:07:15.724219 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-03 06:07:15.724228 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-03 06:07:15.724236 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-03 06:07:15.724246 | orchestrator | 2026-02-03 06:07:15.724254 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-03 06:07:15.724295 | orchestrator | Tuesday 03 February 2026 06:06:42 +0000 (0:00:02.295) 0:11:55.882 ****** 2026-02-03 06:07:15.724311 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:07:15.724327 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-03 06:07:15.724341 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:07:15.724355 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-03 06:07:15.724365 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-03 06:07:15.724373 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-03 06:07:15.724382 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-03 06:07:15.724390 | orchestrator | 2026-02-03 06:07:15.724399 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-02-03 06:07:15.724408 | orchestrator | Tuesday 03 February 2026 06:06:45 +0000 (0:00:02.638) 0:11:58.520 
****** 2026-02-03 06:07:15.724417 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:07:15.724425 | orchestrator | 2026-02-03 06:07:15.724434 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-02-03 06:07:15.724443 | orchestrator | Tuesday 03 February 2026 06:06:46 +0000 (0:00:01.053) 0:11:59.574 ****** 2026-02-03 06:07:15.724451 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:07:15.724460 | orchestrator | 2026-02-03 06:07:15.724469 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-02-03 06:07:15.724477 | orchestrator | Tuesday 03 February 2026 06:06:47 +0000 (0:00:00.959) 0:12:00.533 ****** 2026-02-03 06:07:15.724496 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:07:15.724504 | orchestrator | 2026-02-03 06:07:15.724513 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-02-03 06:07:15.724522 | orchestrator | Tuesday 03 February 2026 06:06:48 +0000 (0:00:00.830) 0:12:01.364 ****** 2026-02-03 06:07:15.724531 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:07:15.724540 | orchestrator | 2026-02-03 06:07:15.724548 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-02-03 06:07:15.724559 | orchestrator | Tuesday 03 February 2026 06:06:49 +0000 (0:00:01.456) 0:12:02.820 ****** 2026-02-03 06:07:15.724573 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:07:15.724588 | orchestrator | 2026-02-03 06:07:15.724601 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-02-03 06:07:15.724615 | orchestrator | Tuesday 03 February 2026 06:06:50 +0000 (0:00:00.836) 0:12:03.657 ****** 2026-02-03 06:07:15.724648 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-03 06:07:15.724663 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-03 
06:07:15.724677 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-03 06:07:15.724692 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:07:15.724708 | orchestrator | 2026-02-03 06:07:15.724723 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-02-03 06:07:15.724738 | orchestrator | Tuesday 03 February 2026 06:06:51 +0000 (0:00:01.142) 0:12:04.800 ****** 2026-02-03 06:07:15.724748 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-0'])  2026-02-03 06:07:15.724757 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-1'])  2026-02-03 06:07:15.724765 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-2'])  2026-02-03 06:07:15.724774 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])  2026-02-03 06:07:15.724783 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])  2026-02-03 06:07:15.724791 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])  2026-02-03 06:07:15.724800 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:07:15.724809 | orchestrator | 2026-02-03 06:07:15.724817 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-02-03 06:07:15.724826 | orchestrator | Tuesday 03 February 2026 06:06:53 +0000 (0:00:01.480) 0:12:06.281 ****** 2026-02-03 06:07:15.724834 | orchestrator | changed: [testbed-node-1] => (item=testbed-node-1) 2026-02-03 06:07:15.724844 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-03 06:07:15.724855 | orchestrator | 2026-02-03 06:07:15.724865 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-02-03 06:07:15.724876 | orchestrator | Tuesday 03 February 2026 06:06:56 +0000 (0:00:03.356) 0:12:09.637 ****** 
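The `Set_fact ceph_run_cmd` and `Set_fact ceph_admin_command` tasks above are delegated to every node in turn because, on a containerized deployment, the `ceph` CLI must be wrapped in an exec against that specific host's mon container. A hedged sketch of the idea — the command layout, container naming, and function name are assumptions for illustration, not ceph-ansible's exact Jinja template:

```python
def ceph_run_cmd(host, containerized=True, container_binary="docker"):
    """Build the per-host ceph invocation: wrapped in the host's
    mon container when containerized, plain `ceph` otherwise."""
    if containerized:
        return f"{container_binary} exec ceph-mon-{host} ceph"
    return "ceph"

print(ceph_run_cmd("testbed-node-0"))
# -> docker exec ceph-mon-testbed-node-0 ceph
```

Computing this per host (rather than once) matters in a rolling update: each delegated invocation must land in the container running on that node, not on the node driving the play.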
2026-02-03 06:07:15.724887 | orchestrator | changed: [testbed-node-1] 2026-02-03 06:07:15.724898 | orchestrator | 2026-02-03 06:07:15.724909 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-03 06:07:15.724919 | orchestrator | Tuesday 03 February 2026 06:06:58 +0000 (0:00:02.243) 0:12:11.880 ****** 2026-02-03 06:07:15.724930 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1 2026-02-03 06:07:15.724942 | orchestrator | 2026-02-03 06:07:15.724953 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-03 06:07:15.724964 | orchestrator | Tuesday 03 February 2026 06:06:59 +0000 (0:00:01.241) 0:12:13.122 ****** 2026-02-03 06:07:15.724982 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1 2026-02-03 06:07:15.724993 | orchestrator | 2026-02-03 06:07:15.725004 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-03 06:07:15.725015 | orchestrator | Tuesday 03 February 2026 06:07:01 +0000 (0:00:01.205) 0:12:14.327 ****** 2026-02-03 06:07:15.725035 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:07:15.725046 | orchestrator | 2026-02-03 06:07:15.725057 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-03 06:07:15.725067 | orchestrator | Tuesday 03 February 2026 06:07:02 +0000 (0:00:01.555) 0:12:15.883 ****** 2026-02-03 06:07:15.725078 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:07:15.725089 | orchestrator | 2026-02-03 06:07:15.725100 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-03 06:07:15.725111 | orchestrator | Tuesday 03 February 2026 06:07:03 +0000 (0:00:01.213) 0:12:17.097 ****** 2026-02-03 06:07:15.725122 | orchestrator | skipping: [testbed-node-1] 2026-02-03 
06:07:15.725133 | orchestrator |
2026-02-03 06:07:15.725144 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-03 06:07:15.725154 | orchestrator | Tuesday 03 February 2026 06:07:05 +0000 (0:00:01.179) 0:12:18.276 ******
2026-02-03 06:07:15.725165 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:15.725176 | orchestrator |
2026-02-03 06:07:15.725187 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-03 06:07:15.725198 | orchestrator | Tuesday 03 February 2026 06:07:06 +0000 (0:00:01.374) 0:12:19.650 ******
2026-02-03 06:07:15.725209 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:07:15.725220 | orchestrator |
2026-02-03 06:07:15.725231 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-03 06:07:15.725242 | orchestrator | Tuesday 03 February 2026 06:07:08 +0000 (0:00:01.690) 0:12:21.341 ******
2026-02-03 06:07:15.725253 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:15.725306 | orchestrator |
2026-02-03 06:07:15.725317 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-03 06:07:15.725329 | orchestrator | Tuesday 03 February 2026 06:07:09 +0000 (0:00:01.217) 0:12:22.559 ******
2026-02-03 06:07:15.725340 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:15.725351 | orchestrator |
2026-02-03 06:07:15.725361 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-03 06:07:15.725372 | orchestrator | Tuesday 03 February 2026 06:07:10 +0000 (0:00:01.298) 0:12:23.858 ******
2026-02-03 06:07:15.725383 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:07:15.725394 | orchestrator |
2026-02-03 06:07:15.725405 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-03 06:07:15.725416 | orchestrator | Tuesday 03 February 2026 06:07:12 +0000 (0:00:01.600) 0:12:25.459 ******
2026-02-03 06:07:15.725427 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:07:15.725438 | orchestrator |
2026-02-03 06:07:15.725448 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-03 06:07:15.725459 | orchestrator | Tuesday 03 February 2026 06:07:13 +0000 (0:00:01.662) 0:12:27.121 ******
2026-02-03 06:07:15.725470 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:15.725481 | orchestrator |
2026-02-03 06:07:15.725492 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-03 06:07:15.725503 | orchestrator | Tuesday 03 February 2026 06:07:14 +0000 (0:00:00.874) 0:12:27.996 ******
2026-02-03 06:07:15.725523 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:07:56.947549 | orchestrator |
2026-02-03 06:07:56.947664 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-03 06:07:56.947682 | orchestrator | Tuesday 03 February 2026 06:07:15 +0000 (0:00:00.900) 0:12:28.896 ******
2026-02-03 06:07:56.947695 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:56.947707 | orchestrator |
2026-02-03 06:07:56.947719 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-03 06:07:56.947730 | orchestrator | Tuesday 03 February 2026 06:07:16 +0000 (0:00:00.861) 0:12:29.758 ******
2026-02-03 06:07:56.947741 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:56.947753 | orchestrator |
2026-02-03 06:07:56.947764 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-03 06:07:56.947775 | orchestrator | Tuesday 03 February 2026 06:07:17 +0000 (0:00:00.845) 0:12:30.603 ******
2026-02-03 06:07:56.947811 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:56.947823 | orchestrator |
2026-02-03 06:07:56.947834 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-03 06:07:56.947845 | orchestrator | Tuesday 03 February 2026 06:07:18 +0000 (0:00:00.830) 0:12:31.434 ******
2026-02-03 06:07:56.947856 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:56.947868 | orchestrator |
2026-02-03 06:07:56.947878 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-03 06:07:56.947890 | orchestrator | Tuesday 03 February 2026 06:07:19 +0000 (0:00:00.873) 0:12:32.307 ******
2026-02-03 06:07:56.947901 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:56.947912 | orchestrator |
2026-02-03 06:07:56.947922 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-03 06:07:56.947934 | orchestrator | Tuesday 03 February 2026 06:07:19 +0000 (0:00:00.802) 0:12:33.109 ******
2026-02-03 06:07:56.947945 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:07:56.947956 | orchestrator |
2026-02-03 06:07:56.947967 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-03 06:07:56.947978 | orchestrator | Tuesday 03 February 2026 06:07:20 +0000 (0:00:00.939) 0:12:34.049 ******
2026-02-03 06:07:56.947989 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:07:56.948001 | orchestrator |
2026-02-03 06:07:56.948012 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-03 06:07:56.948023 | orchestrator | Tuesday 03 February 2026 06:07:21 +0000 (0:00:00.837) 0:12:34.887 ******
2026-02-03 06:07:56.948034 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:07:56.948045 | orchestrator |
2026-02-03 06:07:56.948056 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-03 06:07:56.948067 | orchestrator | Tuesday 03 February 2026 06:07:22 +0000 (0:00:00.832) 0:12:35.719 ******
2026-02-03 06:07:56.948078 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:56.948091 | orchestrator |
2026-02-03 06:07:56.948120 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-03 06:07:56.948134 | orchestrator | Tuesday 03 February 2026 06:07:23 +0000 (0:00:00.820) 0:12:36.540 ******
2026-02-03 06:07:56.948148 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:56.948162 | orchestrator |
2026-02-03 06:07:56.948174 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-03 06:07:56.948187 | orchestrator | Tuesday 03 February 2026 06:07:24 +0000 (0:00:00.767) 0:12:37.307 ******
2026-02-03 06:07:56.948201 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:56.948215 | orchestrator |
2026-02-03 06:07:56.948228 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-03 06:07:56.948241 | orchestrator | Tuesday 03 February 2026 06:07:24 +0000 (0:00:00.834) 0:12:38.142 ******
2026-02-03 06:07:56.948254 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:56.948288 | orchestrator |
2026-02-03 06:07:56.948301 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-03 06:07:56.948314 | orchestrator | Tuesday 03 February 2026 06:07:25 +0000 (0:00:00.864) 0:12:39.006 ******
2026-02-03 06:07:56.948327 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:56.948340 | orchestrator |
2026-02-03 06:07:56.948353 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-03 06:07:56.948366 | orchestrator | Tuesday 03 February 2026 06:07:26 +0000 (0:00:00.866) 0:12:39.873 ******
2026-02-03 06:07:56.948379 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:56.948391 | orchestrator |
2026-02-03 06:07:56.948405 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-03 06:07:56.948418 | orchestrator | Tuesday 03 February 2026 06:07:27 +0000 (0:00:00.818) 0:12:40.692 ******
2026-02-03 06:07:56.948432 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:56.948467 | orchestrator |
2026-02-03 06:07:56.948478 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-03 06:07:56.948490 | orchestrator | Tuesday 03 February 2026 06:07:28 +0000 (0:00:00.824) 0:12:41.516 ******
2026-02-03 06:07:56.948510 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:56.948521 | orchestrator |
2026-02-03 06:07:56.948532 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-03 06:07:56.948543 | orchestrator | Tuesday 03 February 2026 06:07:29 +0000 (0:00:00.913) 0:12:42.430 ******
2026-02-03 06:07:56.948554 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:56.948565 | orchestrator |
2026-02-03 06:07:56.948576 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-03 06:07:56.948587 | orchestrator | Tuesday 03 February 2026 06:07:30 +0000 (0:00:00.862) 0:12:43.292 ******
2026-02-03 06:07:56.948598 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:56.948609 | orchestrator |
2026-02-03 06:07:56.948620 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-03 06:07:56.948631 | orchestrator | Tuesday 03 February 2026 06:07:30 +0000 (0:00:00.867) 0:12:44.160 ******
2026-02-03 06:07:56.948642 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:56.948653 | orchestrator |
2026-02-03 06:07:56.948664 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-03 06:07:56.948675 | orchestrator | Tuesday 03 February 2026 06:07:31 +0000 (0:00:00.889) 0:12:45.049 ******
2026-02-03 06:07:56.948686 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:56.948697 | orchestrator |
2026-02-03 06:07:56.948726 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-03 06:07:56.948738 | orchestrator | Tuesday 03 February 2026 06:07:32 +0000 (0:00:00.916) 0:12:45.966 ******
2026-02-03 06:07:56.948749 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:07:56.948760 | orchestrator |
2026-02-03 06:07:56.948771 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-03 06:07:56.948783 | orchestrator | Tuesday 03 February 2026 06:07:34 +0000 (0:00:02.296) 0:12:47.717 ******
2026-02-03 06:07:56.948793 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:07:56.948804 | orchestrator |
2026-02-03 06:07:56.948815 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-03 06:07:56.948826 | orchestrator | Tuesday 03 February 2026 06:07:36 +0000 (0:00:02.296) 0:12:50.013 ******
2026-02-03 06:07:56.948837 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1
2026-02-03 06:07:56.948849 | orchestrator |
2026-02-03 06:07:56.948860 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-03 06:07:56.948871 | orchestrator | Tuesday 03 February 2026 06:07:37 +0000 (0:00:01.169) 0:12:51.183 ******
2026-02-03 06:07:56.948882 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:56.948893 | orchestrator |
2026-02-03 06:07:56.948904 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-03 06:07:56.948915 | orchestrator | Tuesday 03 February 2026 06:07:39 +0000 (0:00:01.212) 0:12:52.395 ******
2026-02-03 06:07:56.948926 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:56.948937 | orchestrator |
2026-02-03 06:07:56.948948 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-03 06:07:56.948959 | orchestrator | Tuesday 03 February 2026 06:07:40 +0000 (0:00:01.210) 0:12:53.606 ******
2026-02-03 06:07:56.948969 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-03 06:07:56.948981 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-03 06:07:56.948992 | orchestrator |
2026-02-03 06:07:56.949002 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-03 06:07:56.949013 | orchestrator | Tuesday 03 February 2026 06:07:42 +0000 (0:00:01.969) 0:12:55.575 ******
2026-02-03 06:07:56.949024 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:07:56.949035 | orchestrator |
2026-02-03 06:07:56.949046 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-03 06:07:56.949057 | orchestrator | Tuesday 03 February 2026 06:07:43 +0000 (0:00:01.567) 0:12:57.143 ******
2026-02-03 06:07:56.949068 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:56.949086 | orchestrator |
2026-02-03 06:07:56.949097 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-03 06:07:56.949113 | orchestrator | Tuesday 03 February 2026 06:07:45 +0000 (0:00:01.292) 0:12:58.436 ******
2026-02-03 06:07:56.949124 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:56.949136 | orchestrator |
2026-02-03 06:07:56.949147 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-03 06:07:56.949158 | orchestrator | Tuesday 03 February 2026 06:07:46 +0000 (0:00:00.874) 0:12:59.310 ******
2026-02-03 06:07:56.949168 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:56.949179 | orchestrator |
2026-02-03 06:07:56.949190 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-03 06:07:56.949201 | orchestrator | Tuesday 03 February 2026 06:07:47 +0000 (0:00:00.905) 0:13:00.215 ******
2026-02-03 06:07:56.949212 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1
2026-02-03 06:07:56.949223 | orchestrator |
2026-02-03 06:07:56.949234 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-03 06:07:56.949244 | orchestrator | Tuesday 03 February 2026 06:07:48 +0000 (0:00:01.145) 0:13:01.361 ******
2026-02-03 06:07:56.949255 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:07:56.949294 | orchestrator |
2026-02-03 06:07:56.949306 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-03 06:07:56.949317 | orchestrator | Tuesday 03 February 2026 06:07:50 +0000 (0:00:01.835) 0:13:03.197 ******
2026-02-03 06:07:56.949328 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-03 06:07:56.949339 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-03 06:07:56.949349 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-03 06:07:56.949360 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:56.949371 | orchestrator |
2026-02-03 06:07:56.949382 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-03 06:07:56.949393 | orchestrator | Tuesday 03 February 2026 06:07:51 +0000 (0:00:01.238) 0:13:04.436 ******
2026-02-03 06:07:56.949404 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:56.949415 | orchestrator |
2026-02-03 06:07:56.949426 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-03 06:07:56.949436 | orchestrator | Tuesday 03 February 2026 06:07:52 +0000 (0:00:01.178) 0:13:05.615 ******
2026-02-03 06:07:56.949447 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:56.949458 | orchestrator |
2026-02-03 06:07:56.949469 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-03 06:07:56.949480 | orchestrator | Tuesday 03 February 2026 06:07:53 +0000 (0:00:01.185) 0:13:06.800 ******
2026-02-03 06:07:56.949491 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:56.949502 | orchestrator |
2026-02-03 06:07:56.949513 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-03 06:07:56.949524 | orchestrator | Tuesday 03 February 2026 06:07:54 +0000 (0:00:01.180) 0:13:07.981 ******
2026-02-03 06:07:56.949535 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:56.949546 | orchestrator |
2026-02-03 06:07:56.949557 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-03 06:07:56.949568 | orchestrator | Tuesday 03 February 2026 06:07:56 +0000 (0:00:01.301) 0:13:09.283 ******
2026-02-03 06:07:56.949579 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:07:56.949590 | orchestrator |
2026-02-03 06:07:56.949608 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-03 06:08:37.813604 | orchestrator | Tuesday 03 February 2026 06:07:56 +0000 (0:00:00.833) 0:13:10.116 ******
2026-02-03 06:08:37.813721 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:08:37.813739 | orchestrator |
2026-02-03 06:08:37.813753 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-03 06:08:37.813765 | orchestrator | Tuesday 03 February 2026 06:07:59 +0000 (0:00:02.234) 0:13:12.350 ******
2026-02-03 06:08:37.813799 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:08:37.813811 | orchestrator |
2026-02-03 06:08:37.813823 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-03 06:08:37.813834 | orchestrator | Tuesday 03 February 2026 06:08:00 +0000 (0:00:00.866) 0:13:13.217 ******
2026-02-03 06:08:37.813845 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1
2026-02-03 06:08:37.813856 | orchestrator |
2026-02-03 06:08:37.813867 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-03 06:08:37.813877 | orchestrator | Tuesday 03 February 2026 06:08:01 +0000 (0:00:01.361) 0:13:14.579 ******
2026-02-03 06:08:37.813888 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:08:37.813900 | orchestrator |
2026-02-03 06:08:37.813911 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-03 06:08:37.813922 | orchestrator | Tuesday 03 February 2026 06:08:02 +0000 (0:00:01.177) 0:13:15.756 ******
2026-02-03 06:08:37.813933 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:08:37.813944 | orchestrator |
2026-02-03 06:08:37.813960 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-03 06:08:37.813980 | orchestrator | Tuesday 03 February 2026 06:08:03 +0000 (0:00:01.246) 0:13:17.003 ******
2026-02-03 06:08:37.814000 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:08:37.814081 | orchestrator |
2026-02-03 06:08:37.814103 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-03 06:08:37.814121 | orchestrator | Tuesday 03 February 2026 06:08:05 +0000 (0:00:01.238) 0:13:18.242 ******
2026-02-03 06:08:37.814140 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:08:37.814159 | orchestrator |
2026-02-03 06:08:37.814192 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-03 06:08:37.814210 | orchestrator | Tuesday 03 February 2026 06:08:06 +0000 (0:00:01.248) 0:13:19.491 ******
2026-02-03 06:08:37.814229 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:08:37.814247 | orchestrator |
2026-02-03 06:08:37.814292 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-03 06:08:37.814314 | orchestrator | Tuesday 03 February 2026 06:08:07 +0000 (0:00:01.192) 0:13:20.683 ******
2026-02-03 06:08:37.814334 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:08:37.814352 | orchestrator |
2026-02-03 06:08:37.814391 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-03 06:08:37.814411 | orchestrator | Tuesday 03 February 2026 06:08:08 +0000 (0:00:01.182) 0:13:21.866 ******
2026-02-03 06:08:37.814429 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:08:37.814451 | orchestrator |
2026-02-03 06:08:37.814470 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-03 06:08:37.814487 | orchestrator | Tuesday 03 February 2026 06:08:09 +0000 (0:00:01.201) 0:13:23.067 ******
2026-02-03 06:08:37.814501 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:08:37.814512 | orchestrator |
2026-02-03 06:08:37.814522 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-03 06:08:37.814533 | orchestrator | Tuesday 03 February 2026 06:08:11 +0000 (0:00:01.254) 0:13:24.321 ******
2026-02-03 06:08:37.814544 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:08:37.814555 | orchestrator |
2026-02-03 06:08:37.814566 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-03 06:08:37.814576 | orchestrator | Tuesday 03 February 2026 06:08:11 +0000 (0:00:00.847) 0:13:25.169 ******
2026-02-03 06:08:37.814588 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1
2026-02-03 06:08:37.814599 | orchestrator |
2026-02-03 06:08:37.814610 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-03 06:08:37.814621 | orchestrator | Tuesday 03 February 2026 06:08:13 +0000 (0:00:01.188) 0:13:26.358 ******
2026-02-03 06:08:37.814632 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph)
2026-02-03 06:08:37.814659 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/)
2026-02-03 06:08:37.814670 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-02-03 06:08:37.814681 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-02-03 06:08:37.814692 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-02-03 06:08:37.814702 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-02-03 06:08:37.814713 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-02-03 06:08:37.814724 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-02-03 06:08:37.814735 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-03 06:08:37.814745 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-03 06:08:37.814756 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-03 06:08:37.814767 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-03 06:08:37.814778 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-03 06:08:37.814789 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-03 06:08:37.814799 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph)
2026-02-03 06:08:37.814810 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph)
2026-02-03 06:08:37.814821 | orchestrator |
2026-02-03 06:08:37.814832 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-03 06:08:37.814843 | orchestrator | Tuesday 03 February 2026 06:08:19 +0000 (0:00:06.787) 0:13:33.146 ******
2026-02-03 06:08:37.814854 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:08:37.814865 | orchestrator |
2026-02-03 06:08:37.814883 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-03 06:08:37.814923 | orchestrator | Tuesday 03 February 2026 06:08:20 +0000 (0:00:00.837) 0:13:33.983 ******
2026-02-03 06:08:37.814943 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:08:37.814961 | orchestrator |
2026-02-03 06:08:37.814979 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-03 06:08:37.814997 | orchestrator | Tuesday 03 February 2026 06:08:21 +0000 (0:00:00.903) 0:13:34.887 ******
2026-02-03 06:08:37.815017 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:08:37.815037 | orchestrator |
2026-02-03 06:08:37.815055 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-03 06:08:37.815070 | orchestrator | Tuesday 03 February 2026 06:08:22 +0000 (0:00:00.827) 0:13:35.715 ******
2026-02-03 06:08:37.815081 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:08:37.815092 | orchestrator |
2026-02-03 06:08:37.815103 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-03 06:08:37.815114 | orchestrator | Tuesday 03 February 2026 06:08:23 +0000 (0:00:00.838) 0:13:36.553 ******
2026-02-03 06:08:37.815127 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:08:37.815146 | orchestrator |
2026-02-03 06:08:37.815165 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-03 06:08:37.815183 | orchestrator | Tuesday 03 February 2026 06:08:24 +0000 (0:00:00.805) 0:13:37.359 ******
2026-02-03 06:08:37.815201 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:08:37.815212 | orchestrator |
2026-02-03 06:08:37.815223 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-03 06:08:37.815234 | orchestrator | Tuesday 03 February 2026 06:08:25 +0000 (0:00:00.844) 0:13:38.204 ******
2026-02-03 06:08:37.815245 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:08:37.815256 | orchestrator |
2026-02-03 06:08:37.815299 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-03 06:08:37.815313 | orchestrator | Tuesday 03 February 2026 06:08:25 +0000 (0:00:00.847) 0:13:39.051 ******
2026-02-03 06:08:37.815324 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:08:37.815334 | orchestrator |
2026-02-03 06:08:37.815345 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-03 06:08:37.815366 | orchestrator | Tuesday 03 February 2026 06:08:26 +0000 (0:00:00.836) 0:13:39.888 ******
2026-02-03 06:08:37.815376 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:08:37.815387 | orchestrator |
2026-02-03 06:08:37.815398 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-03 06:08:37.815408 | orchestrator | Tuesday 03 February 2026 06:08:27 +0000 (0:00:00.812) 0:13:40.700 ******
2026-02-03 06:08:37.815419 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:08:37.815430 | orchestrator |
2026-02-03 06:08:37.815447 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-03 06:08:37.815459 | orchestrator | Tuesday 03 February 2026 06:08:28 +0000 (0:00:00.829) 0:13:41.530 ******
2026-02-03 06:08:37.815469 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:08:37.815480 | orchestrator |
2026-02-03 06:08:37.815491 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-03 06:08:37.815502 | orchestrator | Tuesday 03 February 2026 06:08:29 +0000 (0:00:00.821) 0:13:42.351 ******
2026-02-03 06:08:37.815513 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:08:37.815523 | orchestrator |
2026-02-03 06:08:37.815534 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-03 06:08:37.815545 | orchestrator | Tuesday 03 February 2026 06:08:30 +0000 (0:00:00.875) 0:13:43.227 ******
2026-02-03 06:08:37.815555 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:08:37.815566 | orchestrator |
2026-02-03 06:08:37.815577 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-03 06:08:37.815588 | orchestrator | Tuesday 03 February 2026 06:08:30 +0000 (0:00:00.873) 0:13:44.101 ******
2026-02-03 06:08:37.815599 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:08:37.815610 | orchestrator |
2026-02-03 06:08:37.815621 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-03 06:08:37.815631 | orchestrator | Tuesday 03 February 2026 06:08:31 +0000 (0:00:00.895) 0:13:44.996 ******
2026-02-03 06:08:37.815642 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:08:37.815653 | orchestrator |
2026-02-03 06:08:37.815663 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-03 06:08:37.815674 | orchestrator | Tuesday 03 February 2026 06:08:32 +0000 (0:00:00.961) 0:13:45.957 ******
2026-02-03 06:08:37.815685 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:08:37.815696 | orchestrator |
2026-02-03 06:08:37.815706 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-03 06:08:37.815717 | orchestrator | Tuesday 03 February 2026 06:08:33 +0000 (0:00:00.828) 0:13:46.786 ******
2026-02-03 06:08:37.815727 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:08:37.815738 | orchestrator |
2026-02-03 06:08:37.815749 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-03 06:08:37.815762 | orchestrator | Tuesday 03 February 2026 06:08:34 +0000 (0:00:00.789) 0:13:47.576 ******
2026-02-03 06:08:37.815772 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:08:37.815783 | orchestrator |
2026-02-03 06:08:37.815794 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-03 06:08:37.815805 | orchestrator | Tuesday 03 February 2026 06:08:35 +0000 (0:00:00.834) 0:13:48.411 ******
2026-02-03 06:08:37.815816 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:08:37.815827 | orchestrator |
2026-02-03 06:08:37.815837 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-03 06:08:37.815848 | orchestrator | Tuesday 03 February 2026 06:08:36 +0000 (0:00:00.859) 0:13:49.270 ******
2026-02-03 06:08:37.815858 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:08:37.815869 | orchestrator |
2026-02-03 06:08:37.815880 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-03 06:08:37.815890 | orchestrator | Tuesday 03 February 2026 06:08:36 +0000 (0:00:00.879) 0:13:50.149 ******
2026-02-03 06:08:37.815901 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:08:37.815920 | orchestrator |
2026-02-03 06:08:37.815943 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-03 06:09:57.133962 | orchestrator | Tuesday 03 February 2026 06:08:37 +0000 (0:00:00.835) 0:13:50.985 ******
2026-02-03 06:09:57.134142 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-03 06:09:57.134162 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-03 06:09:57.134175 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-03 06:09:57.134186 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:09:57.134198 | orchestrator |
2026-02-03 06:09:57.134210 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-03 06:09:57.134221 | orchestrator | Tuesday 03 February 2026 06:08:38 +0000 (0:00:01.129) 0:13:52.114 ******
2026-02-03 06:09:57.134232 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-03 06:09:57.134243 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-03 06:09:57.134254 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-03 06:09:57.134265 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:09:57.134324 | orchestrator |
2026-02-03 06:09:57.134339 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-03 06:09:57.134350 | orchestrator | Tuesday 03 February 2026 06:08:40 +0000 (0:00:01.272) 0:13:53.387 ******
2026-02-03 06:09:57.134361 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-03 06:09:57.134372 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-03 06:09:57.134383 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-03 06:09:57.134394 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:09:57.134405 | orchestrator |
2026-02-03 06:09:57.134416 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-03 06:09:57.134427 | orchestrator | Tuesday 03 February 2026 06:08:41 +0000 (0:00:01.162) 0:13:54.549 ******
2026-02-03 06:09:57.134438 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:09:57.134449 | orchestrator |
2026-02-03 06:09:57.134460 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-03 06:09:57.134471 | orchestrator | Tuesday 03 February 2026 06:08:42 +0000 (0:00:00.845) 0:13:55.395 ******
2026-02-03 06:09:57.134483 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-03 06:09:57.134494 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:09:57.134509 | orchestrator |
2026-02-03 06:09:57.134522 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-03 06:09:57.134535 | orchestrator | Tuesday 03 February 2026 06:08:43 +0000 (0:00:01.126) 0:13:56.521 ******
2026-02-03 06:09:57.134548 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:09:57.134561 | orchestrator |
2026-02-03 06:09:57.134591 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-02-03 06:09:57.134605 | orchestrator | Tuesday 03 February 2026 06:08:44 +0000 (0:00:01.545) 0:13:58.067 ******
2026-02-03 06:09:57.134618 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:09:57.134631 | orchestrator |
2026-02-03 06:09:57.134644 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-02-03 06:09:57.134657 | orchestrator | Tuesday 03 February 2026 06:08:45 +0000 (0:00:00.988) 0:13:59.056 ******
2026-02-03 06:09:57.134670 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-1
2026-02-03 06:09:57.134683 | orchestrator |
2026-02-03 06:09:57.134696 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-02-03 06:09:57.134709 | orchestrator | Tuesday 03 February 2026 06:08:47 +0000 (0:00:01.251) 0:14:00.307 ******
2026-02-03 06:09:57.134721 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)]
2026-02-03 06:09:57.134735 | orchestrator |
2026-02-03 06:09:57.134748 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-02-03 06:09:57.134761 | orchestrator | Tuesday 03 February 2026 06:08:50 +0000 (0:00:03.294) 0:14:03.601 ******
2026-02-03 06:09:57.134799 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:09:57.134813 | orchestrator |
2026-02-03 06:09:57.134825 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-02-03 06:09:57.134839 | orchestrator | Tuesday 03 February 2026 06:08:51 +0000 (0:00:01.239) 0:14:04.840 ******
2026-02-03 06:09:57.134852 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:09:57.134866 | orchestrator |
2026-02-03 06:09:57.134877 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-02-03 06:09:57.134888 | orchestrator | Tuesday 03 February 2026 06:08:52 +0000 (0:00:01.240) 0:14:06.081 ******
2026-02-03 06:09:57.134899 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:09:57.134911 | orchestrator |
2026-02-03 06:09:57.134930 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-02-03 06:09:57.134960 | orchestrator | Tuesday 03 February 2026 06:08:54 +0000 (0:00:01.245) 0:14:07.326 ******
2026-02-03 06:09:57.134980 | orchestrator | changed: [testbed-node-1]
2026-02-03 06:09:57.134998 | orchestrator |
2026-02-03 06:09:57.135050 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-02-03 06:09:57.135068 | orchestrator | Tuesday 03 February 2026 06:08:56 +0000 (0:00:02.255) 0:14:09.582 ******
2026-02-03 06:09:57.135085 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:09:57.135102 | orchestrator |
2026-02-03 06:09:57.135120 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-02-03 06:09:57.135139 | orchestrator | Tuesday 03 February 2026 06:08:58 +0000 (0:00:01.704) 0:14:11.286 ******
2026-02-03 06:09:57.135158 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:09:57.135176 | orchestrator |
2026-02-03 06:09:57.135195 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-02-03 06:09:57.135214 | orchestrator | Tuesday 03 February 2026 06:08:59 +0000 (0:00:01.611) 0:14:12.898 ******
2026-02-03 06:09:57.135232 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:09:57.135246 | orchestrator |
2026-02-03 06:09:57.135258 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-02-03 06:09:57.135328 | orchestrator | Tuesday 03 February 2026 06:09:01 +0000 (0:00:01.647) 0:14:14.545 ******
2026-02-03 06:09:57.135351 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-02-03 06:09:57.135369 | orchestrator |
2026-02-03 06:09:57.135414 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-02-03 06:09:57.135435 | orchestrator | Tuesday 03 February 2026 06:09:03 +0000 (0:00:01.664) 0:14:16.210 ******
2026-02-03 06:09:57.135454 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-02-03 06:09:57.135472 | orchestrator |
2026-02-03 06:09:57.135486 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-02-03 06:09:57.135497 | orchestrator | Tuesday 03 February 2026 06:09:04 +0000 (0:00:01.788) 0:14:17.998 ******
2026-02-03 06:09:57.135509 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-03 06:09:57.135520 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-02-03 06:09:57.135531 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-03 06:09:57.135542 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-02-03 06:09:57.135552 | orchestrator |
2026-02-03 06:09:57.135563 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-02-03 06:09:57.135574 | orchestrator | Tuesday 03 February 2026 06:09:08 +0000 (0:00:04.048) 0:14:22.047 ******
2026-02-03 06:09:57.135584 | orchestrator | changed: [testbed-node-1]
2026-02-03 06:09:57.135595 | orchestrator |
2026-02-03 06:09:57.135606 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container
command] ************************** 2026-02-03 06:09:57.135617 | orchestrator | Tuesday 03 February 2026 06:09:10 +0000 (0:00:02.139) 0:14:24.186 ****** 2026-02-03 06:09:57.135628 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:09:57.135639 | orchestrator | 2026-02-03 06:09:57.135649 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-03 06:09:57.135660 | orchestrator | Tuesday 03 February 2026 06:09:12 +0000 (0:00:01.271) 0:14:25.458 ****** 2026-02-03 06:09:57.135687 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:09:57.135698 | orchestrator | 2026-02-03 06:09:57.135709 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-03 06:09:57.135720 | orchestrator | Tuesday 03 February 2026 06:09:13 +0000 (0:00:01.232) 0:14:26.690 ****** 2026-02-03 06:09:57.135731 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:09:57.135742 | orchestrator | 2026-02-03 06:09:57.135752 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-03 06:09:57.135763 | orchestrator | Tuesday 03 February 2026 06:09:15 +0000 (0:00:01.834) 0:14:28.524 ****** 2026-02-03 06:09:57.135774 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:09:57.135785 | orchestrator | 2026-02-03 06:09:57.135795 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-03 06:09:57.135806 | orchestrator | Tuesday 03 February 2026 06:09:17 +0000 (0:00:01.664) 0:14:30.189 ****** 2026-02-03 06:09:57.135825 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:09:57.135837 | orchestrator | 2026-02-03 06:09:57.135848 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-03 06:09:57.135858 | orchestrator | Tuesday 03 February 2026 06:09:17 +0000 (0:00:00.802) 0:14:30.991 ****** 2026-02-03 06:09:57.135869 | orchestrator | included: 
/ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-1 2026-02-03 06:09:57.135880 | orchestrator | 2026-02-03 06:09:57.135891 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-03 06:09:57.135905 | orchestrator | Tuesday 03 February 2026 06:09:18 +0000 (0:00:01.165) 0:14:32.157 ****** 2026-02-03 06:09:57.135924 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:09:57.135942 | orchestrator | 2026-02-03 06:09:57.135960 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-03 06:09:57.135977 | orchestrator | Tuesday 03 February 2026 06:09:20 +0000 (0:00:01.187) 0:14:33.345 ****** 2026-02-03 06:09:57.135994 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:09:57.136013 | orchestrator | 2026-02-03 06:09:57.136031 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-03 06:09:57.136051 | orchestrator | Tuesday 03 February 2026 06:09:21 +0000 (0:00:01.248) 0:14:34.594 ****** 2026-02-03 06:09:57.136071 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-1 2026-02-03 06:09:57.136088 | orchestrator | 2026-02-03 06:09:57.136104 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-03 06:09:57.136116 | orchestrator | Tuesday 03 February 2026 06:09:22 +0000 (0:00:01.206) 0:14:35.801 ****** 2026-02-03 06:09:57.136127 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:09:57.136138 | orchestrator | 2026-02-03 06:09:57.136149 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-03 06:09:57.136159 | orchestrator | Tuesday 03 February 2026 06:09:25 +0000 (0:00:02.495) 0:14:38.297 ****** 2026-02-03 06:09:57.136170 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:09:57.136182 | orchestrator | 2026-02-03 06:09:57.136192 | orchestrator | TASK [ceph-mon : Enable 
ceph-mon.target] *************************************** 2026-02-03 06:09:57.136203 | orchestrator | Tuesday 03 February 2026 06:09:27 +0000 (0:00:02.001) 0:14:40.299 ****** 2026-02-03 06:09:57.136215 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:09:57.136226 | orchestrator | 2026-02-03 06:09:57.136237 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-03 06:09:57.136247 | orchestrator | Tuesday 03 February 2026 06:09:29 +0000 (0:00:02.581) 0:14:42.880 ****** 2026-02-03 06:09:57.136258 | orchestrator | changed: [testbed-node-1] 2026-02-03 06:09:57.136269 | orchestrator | 2026-02-03 06:09:57.136308 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-03 06:09:57.136320 | orchestrator | Tuesday 03 February 2026 06:09:32 +0000 (0:00:02.982) 0:14:45.862 ****** 2026-02-03 06:09:57.136331 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-1 2026-02-03 06:09:57.136342 | orchestrator | 2026-02-03 06:09:57.136352 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-03 06:09:57.136373 | orchestrator | Tuesday 03 February 2026 06:09:33 +0000 (0:00:01.174) 0:14:47.037 ****** 2026-02-03 06:09:57.136383 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-02-03 06:09:57.136395 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:09:57.136406 | orchestrator | 2026-02-03 06:09:57.136416 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-03 06:09:57.136437 | orchestrator | Tuesday 03 February 2026 06:09:57 +0000 (0:00:23.271) 0:15:10.308 ****** 2026-02-03 06:10:43.517221 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:10:43.517441 | orchestrator | 2026-02-03 06:10:43.517457 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-03 06:10:43.517469 | orchestrator | Tuesday 03 February 2026 06:09:59 +0000 (0:00:02.795) 0:15:13.104 ****** 2026-02-03 06:10:43.517479 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:10:43.517490 | orchestrator | 2026-02-03 06:10:43.517500 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-03 06:10:43.517510 | orchestrator | Tuesday 03 February 2026 06:10:00 +0000 (0:00:00.895) 0:15:13.999 ****** 2026-02-03 06:10:43.517521 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66d9d5c0a411652b15952a056b02e5f2a47ac31f'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-03 06:10:43.517568 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66d9d5c0a411652b15952a056b02e5f2a47ac31f'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-03 06:10:43.517579 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66d9d5c0a411652b15952a056b02e5f2a47ac31f'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-03 06:10:43.517606 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66d9d5c0a411652b15952a056b02e5f2a47ac31f'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-03 06:10:43.517619 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66d9d5c0a411652b15952a056b02e5f2a47ac31f'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-03 06:10:43.517630 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66d9d5c0a411652b15952a056b02e5f2a47ac31f'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__66d9d5c0a411652b15952a056b02e5f2a47ac31f'}])  2026-02-03 06:10:43.517642 | orchestrator | 2026-02-03 06:10:43.517652 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-03 06:10:43.517662 | orchestrator | Tuesday 03 February 2026 06:10:10 +0000 (0:00:09.947) 0:15:23.946 ****** 2026-02-03 06:10:43.517671 | orchestrator | changed: [testbed-node-1] 2026-02-03 06:10:43.517681 | orchestrator | 
2026-02-03 06:10:43.517691 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-03 06:10:43.517723 | orchestrator | Tuesday 03 February 2026 06:10:13 +0000 (0:00:02.378) 0:15:26.325 ****** 2026-02-03 06:10:43.517752 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:10:43.517764 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-02-03 06:10:43.517776 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-02-03 06:10:43.517787 | orchestrator | 2026-02-03 06:10:43.517797 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-03 06:10:43.517806 | orchestrator | Tuesday 03 February 2026 06:10:14 +0000 (0:00:01.748) 0:15:28.074 ****** 2026-02-03 06:10:43.517816 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-03 06:10:43.517826 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-03 06:10:43.517835 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-03 06:10:43.517845 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:10:43.517854 | orchestrator | 2026-02-03 06:10:43.517864 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-03 06:10:43.517874 | orchestrator | Tuesday 03 February 2026 06:10:16 +0000 (0:00:01.232) 0:15:29.306 ****** 2026-02-03 06:10:43.517883 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:10:43.517893 | orchestrator | 2026-02-03 06:10:43.517902 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-02-03 06:10:43.517928 | orchestrator | Tuesday 03 February 2026 06:10:16 +0000 (0:00:00.867) 0:15:30.174 ****** 2026-02-03 06:10:43.517939 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:10:43.517949 | orchestrator | 2026-02-03 06:10:43.517958 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-02-03 06:10:43.517968 | orchestrator | 2026-02-03 06:10:43.517977 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-02-03 06:10:43.517987 | orchestrator | Tuesday 03 February 2026 06:10:19 +0000 (0:00:02.305) 0:15:32.479 ****** 2026-02-03 06:10:43.517996 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:10:43.518006 | orchestrator | 2026-02-03 06:10:43.518068 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-02-03 06:10:43.518079 | orchestrator | Tuesday 03 February 2026 06:10:20 +0000 (0:00:01.233) 0:15:33.713 ****** 2026-02-03 06:10:43.518089 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:10:43.518098 | orchestrator | 2026-02-03 06:10:43.518108 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-02-03 06:10:43.518118 | orchestrator | Tuesday 03 February 2026 06:10:21 +0000 (0:00:00.781) 0:15:34.494 ****** 2026-02-03 06:10:43.518127 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:10:43.518137 | orchestrator | 2026-02-03 06:10:43.518146 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-02-03 06:10:43.518156 | orchestrator | Tuesday 03 February 2026 06:10:22 +0000 (0:00:00.801) 0:15:35.296 ****** 2026-02-03 06:10:43.518165 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:10:43.518175 | orchestrator | 2026-02-03 06:10:43.518184 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-03 06:10:43.518194 | orchestrator | Tuesday 03 February 
2026 06:10:22 +0000 (0:00:00.803) 0:15:36.100 ****** 2026-02-03 06:10:43.518204 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2 2026-02-03 06:10:43.518213 | orchestrator | 2026-02-03 06:10:43.518223 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-03 06:10:43.518232 | orchestrator | Tuesday 03 February 2026 06:10:24 +0000 (0:00:01.367) 0:15:37.467 ****** 2026-02-03 06:10:43.518242 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:10:43.518251 | orchestrator | 2026-02-03 06:10:43.518261 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-03 06:10:43.518270 | orchestrator | Tuesday 03 February 2026 06:10:25 +0000 (0:00:01.571) 0:15:39.039 ****** 2026-02-03 06:10:43.518307 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:10:43.518318 | orchestrator | 2026-02-03 06:10:43.518328 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-03 06:10:43.518343 | orchestrator | Tuesday 03 February 2026 06:10:27 +0000 (0:00:01.188) 0:15:40.227 ****** 2026-02-03 06:10:43.518353 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:10:43.518363 | orchestrator | 2026-02-03 06:10:43.518373 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-03 06:10:43.518382 | orchestrator | Tuesday 03 February 2026 06:10:28 +0000 (0:00:01.525) 0:15:41.752 ****** 2026-02-03 06:10:43.518392 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:10:43.518401 | orchestrator | 2026-02-03 06:10:43.518411 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-03 06:10:43.518420 | orchestrator | Tuesday 03 February 2026 06:10:29 +0000 (0:00:01.204) 0:15:42.956 ****** 2026-02-03 06:10:43.518430 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:10:43.518439 | orchestrator | 2026-02-03 06:10:43.518449 | 
orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-03 06:10:43.518458 | orchestrator | Tuesday 03 February 2026 06:10:31 +0000 (0:00:01.267) 0:15:44.224 ****** 2026-02-03 06:10:43.518468 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:10:43.518478 | orchestrator | 2026-02-03 06:10:43.518487 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-03 06:10:43.518497 | orchestrator | Tuesday 03 February 2026 06:10:32 +0000 (0:00:01.223) 0:15:45.448 ****** 2026-02-03 06:10:43.518506 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:10:43.518516 | orchestrator | 2026-02-03 06:10:43.518525 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-03 06:10:43.518535 | orchestrator | Tuesday 03 February 2026 06:10:33 +0000 (0:00:01.238) 0:15:46.686 ****** 2026-02-03 06:10:43.518544 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:10:43.518554 | orchestrator | 2026-02-03 06:10:43.518565 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-03 06:10:43.518581 | orchestrator | Tuesday 03 February 2026 06:10:34 +0000 (0:00:01.186) 0:15:47.873 ****** 2026-02-03 06:10:43.518597 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:10:43.518613 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:10:43.518628 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-03 06:10:43.518645 | orchestrator | 2026-02-03 06:10:43.518661 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-03 06:10:43.518677 | orchestrator | Tuesday 03 February 2026 06:10:36 +0000 (0:00:02.208) 0:15:50.081 ****** 2026-02-03 06:10:43.518690 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:10:43.518700 | 
orchestrator | 2026-02-03 06:10:43.518709 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-03 06:10:43.518719 | orchestrator | Tuesday 03 February 2026 06:10:38 +0000 (0:00:01.371) 0:15:51.452 ****** 2026-02-03 06:10:43.518728 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:10:43.518738 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:10:43.518747 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-03 06:10:43.518757 | orchestrator | 2026-02-03 06:10:43.518766 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-03 06:10:43.518776 | orchestrator | Tuesday 03 February 2026 06:10:41 +0000 (0:00:03.399) 0:15:54.852 ****** 2026-02-03 06:10:43.518785 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-03 06:10:43.518795 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-03 06:10:43.518805 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-03 06:10:43.518822 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:11:07.840805 | orchestrator | 2026-02-03 06:11:07.840908 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-03 06:11:07.840940 | orchestrator | Tuesday 03 February 2026 06:10:43 +0000 (0:00:01.833) 0:15:56.685 ****** 2026-02-03 06:11:07.840949 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-03 06:11:07.840959 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-03 06:11:07.840967 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-03 06:11:07.840974 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:11:07.840986 | orchestrator | 2026-02-03 06:11:07.840998 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-03 06:11:07.841009 | orchestrator | Tuesday 03 February 2026 06:10:45 +0000 (0:00:02.452) 0:15:59.138 ****** 2026-02-03 06:11:07.841022 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 06:11:07.841051 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 06:11:07.841062 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 06:11:07.841072 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:11:07.841082 | orchestrator | 2026-02-03 06:11:07.841092 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-03 06:11:07.841102 | orchestrator | Tuesday 03 February 2026 06:10:47 +0000 (0:00:01.254) 0:16:00.392 ****** 2026-02-03 06:11:07.841114 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'fc9af7e241e8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-03 06:10:38.814319', 'end': '2026-02-03 06:10:38.861607', 'delta': '0:00:00.047288', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fc9af7e241e8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-03 06:11:07.841127 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'a8f198eef309', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-03 06:10:39.769335', 'end': '2026-02-03 06:10:39.826494', 'delta': '0:00:00.057159', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a8f198eef309'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-03 06:11:07.841168 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '7edf8d69a692', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-03 06:10:40.459447', 'end': '2026-02-03 06:10:40.509860', 'delta': '0:00:00.050413', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7edf8d69a692'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-03 06:11:07.841183 | orchestrator | 2026-02-03 06:11:07.841194 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-03 06:11:07.841205 | orchestrator | Tuesday 03 February 2026 06:10:48 +0000 (0:00:01.241) 0:16:01.633 ****** 2026-02-03 06:11:07.841216 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:11:07.841228 | orchestrator | 2026-02-03 06:11:07.841240 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-03 06:11:07.841252 | orchestrator | Tuesday 03 February 2026 06:10:49 +0000 (0:00:01.369) 0:16:03.003 ****** 2026-02-03 06:11:07.841263 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:11:07.841271 | orchestrator | 2026-02-03 06:11:07.841277 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-03 06:11:07.841309 | orchestrator | Tuesday 03 February 2026 06:10:51 +0000 (0:00:01.317) 0:16:04.320 ****** 2026-02-03 06:11:07.841317 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:11:07.841324 | orchestrator | 2026-02-03 06:11:07.841331 | orchestrator | TASK 
[ceph-facts : Get current fsid] ******************************************* 2026-02-03 06:11:07.841338 | orchestrator | Tuesday 03 February 2026 06:10:52 +0000 (0:00:01.221) 0:16:05.542 ****** 2026-02-03 06:11:07.841344 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-03 06:11:07.841351 | orchestrator | 2026-02-03 06:11:07.841360 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-03 06:11:07.841373 | orchestrator | Tuesday 03 February 2026 06:10:54 +0000 (0:00:02.041) 0:16:07.584 ****** 2026-02-03 06:11:07.841382 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:11:07.841390 | orchestrator | 2026-02-03 06:11:07.841397 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-03 06:11:07.841405 | orchestrator | Tuesday 03 February 2026 06:10:55 +0000 (0:00:01.334) 0:16:08.918 ****** 2026-02-03 06:11:07.841413 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:11:07.841421 | orchestrator | 2026-02-03 06:11:07.841429 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-03 06:11:07.841438 | orchestrator | Tuesday 03 February 2026 06:10:56 +0000 (0:00:01.210) 0:16:10.128 ****** 2026-02-03 06:11:07.841446 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:11:07.841454 | orchestrator | 2026-02-03 06:11:07.841462 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-03 06:11:07.841470 | orchestrator | Tuesday 03 February 2026 06:10:58 +0000 (0:00:01.325) 0:16:11.454 ****** 2026-02-03 06:11:07.841478 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:11:07.841487 | orchestrator | 2026-02-03 06:11:07.841495 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-03 06:11:07.841502 | orchestrator | Tuesday 03 February 2026 06:10:59 +0000 (0:00:01.156) 0:16:12.610 ****** 
2026-02-03 06:11:07.841510 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:11:07.841518 | orchestrator | 2026-02-03 06:11:07.841532 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-03 06:11:07.841540 | orchestrator | Tuesday 03 February 2026 06:11:00 +0000 (0:00:01.159) 0:16:13.770 ****** 2026-02-03 06:11:07.841548 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:11:07.841556 | orchestrator | 2026-02-03 06:11:07.841564 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-03 06:11:07.841572 | orchestrator | Tuesday 03 February 2026 06:11:01 +0000 (0:00:01.122) 0:16:14.892 ****** 2026-02-03 06:11:07.841580 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:11:07.841589 | orchestrator | 2026-02-03 06:11:07.841597 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-03 06:11:07.841605 | orchestrator | Tuesday 03 February 2026 06:11:02 +0000 (0:00:01.207) 0:16:16.100 ****** 2026-02-03 06:11:07.841613 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:11:07.841621 | orchestrator | 2026-02-03 06:11:07.841629 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-03 06:11:07.841637 | orchestrator | Tuesday 03 February 2026 06:11:04 +0000 (0:00:01.284) 0:16:17.384 ****** 2026-02-03 06:11:07.841645 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:11:07.841653 | orchestrator | 2026-02-03 06:11:07.841661 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-03 06:11:07.841670 | orchestrator | Tuesday 03 February 2026 06:11:05 +0000 (0:00:01.143) 0:16:18.527 ****** 2026-02-03 06:11:07.841678 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:11:07.841686 | orchestrator | 2026-02-03 06:11:07.841694 | orchestrator | TASK [ceph-facts : Collect existed devices] 
************************************ 2026-02-03 06:11:07.841702 | orchestrator | Tuesday 03 February 2026 06:11:06 +0000 (0:00:01.204) 0:16:19.732 ****** 2026-02-03 06:11:07.841718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:11:09.149687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:11:09.149793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:11:09.149810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-40-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 
'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-03 06:11:09.149843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:11:09.149881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:11:09.149894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:11:09.149930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5699a710', 'removable': '0', 'support_discard': '4096', 'partitions': 
{'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part16', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part14', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part15', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part1', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-03 06:11:09.149945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:11:09.149957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:11:09.149982 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:11:09.149996 | orchestrator | 2026-02-03 06:11:09.150008 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-03 06:11:09.150084 | orchestrator | Tuesday 03 February 2026 06:11:07 +0000 (0:00:01.283) 0:16:21.016 ****** 2026-02-03 06:11:09.150099 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:11:09.150113 | 
orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:11:09.150124 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:11:09.150145 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-40-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:11:24.245931 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:11:24.246096 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:11:24.246145 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:11:24.246173 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5699a710', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part16', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part14', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part15', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part1', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:11:24.246184 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:11:24.246197 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:11:24.246211 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:11:24.246220 | orchestrator | 2026-02-03 06:11:24.246228 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-03 06:11:24.246237 | 
orchestrator | Tuesday 03 February 2026 06:11:09 +0000 (0:00:01.307) 0:16:22.324 ****** 2026-02-03 06:11:24.246244 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:11:24.246253 | orchestrator | 2026-02-03 06:11:24.246260 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-03 06:11:24.246267 | orchestrator | Tuesday 03 February 2026 06:11:10 +0000 (0:00:01.601) 0:16:23.926 ****** 2026-02-03 06:11:24.246275 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:11:24.246282 | orchestrator | 2026-02-03 06:11:24.246346 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-03 06:11:24.246359 | orchestrator | Tuesday 03 February 2026 06:11:11 +0000 (0:00:01.223) 0:16:25.150 ****** 2026-02-03 06:11:24.246370 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:11:24.246378 | orchestrator | 2026-02-03 06:11:24.246385 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-03 06:11:24.246393 | orchestrator | Tuesday 03 February 2026 06:11:13 +0000 (0:00:01.595) 0:16:26.746 ****** 2026-02-03 06:11:24.246400 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:11:24.246407 | orchestrator | 2026-02-03 06:11:24.246415 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-03 06:11:24.246422 | orchestrator | Tuesday 03 February 2026 06:11:14 +0000 (0:00:01.210) 0:16:27.956 ****** 2026-02-03 06:11:24.246429 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:11:24.246436 | orchestrator | 2026-02-03 06:11:24.246444 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-03 06:11:24.246451 | orchestrator | Tuesday 03 February 2026 06:11:16 +0000 (0:00:01.437) 0:16:29.394 ****** 2026-02-03 06:11:24.246458 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:11:24.246465 | orchestrator | 2026-02-03 06:11:24.246473 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-03 06:11:24.246481 | orchestrator | Tuesday 03 February 2026 06:11:17 +0000 (0:00:01.259) 0:16:30.654 ****** 2026-02-03 06:11:24.246490 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-03 06:11:24.246498 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-02-03 06:11:24.246507 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-03 06:11:24.246515 | orchestrator | 2026-02-03 06:11:24.246524 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-03 06:11:24.246533 | orchestrator | Tuesday 03 February 2026 06:11:19 +0000 (0:00:02.222) 0:16:32.877 ****** 2026-02-03 06:11:24.246541 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-03 06:11:24.246551 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-03 06:11:24.246559 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-03 06:11:24.246568 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:11:24.246576 | orchestrator | 2026-02-03 06:11:24.246585 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-03 06:11:24.246593 | orchestrator | Tuesday 03 February 2026 06:11:20 +0000 (0:00:01.244) 0:16:34.122 ****** 2026-02-03 06:11:24.246602 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:11:24.246611 | orchestrator | 2026-02-03 06:11:24.246619 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-03 06:11:24.246628 | orchestrator | Tuesday 03 February 2026 06:11:22 +0000 (0:00:01.236) 0:16:35.358 ****** 2026-02-03 06:11:24.246643 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:11:24.246653 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => 
(item=testbed-node-1) 2026-02-03 06:11:24.246662 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-03 06:11:24.246670 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-03 06:11:24.246679 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-03 06:11:24.246695 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-03 06:12:05.111209 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-03 06:12:05.111399 | orchestrator | 2026-02-03 06:12:05.111414 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-03 06:12:05.111423 | orchestrator | Tuesday 03 February 2026 06:11:24 +0000 (0:00:02.059) 0:16:37.418 ****** 2026-02-03 06:12:05.111430 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:12:05.111437 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:12:05.111445 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-03 06:12:05.111453 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-03 06:12:05.111460 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-03 06:12:05.111467 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-03 06:12:05.111474 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-03 06:12:05.111481 | orchestrator | 2026-02-03 06:12:05.111488 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-02-03 06:12:05.111494 | orchestrator | Tuesday 03 February 2026 06:11:26 +0000 (0:00:02.494) 0:16:39.913 
****** 2026-02-03 06:12:05.111501 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:12:05.111509 | orchestrator | 2026-02-03 06:12:05.111516 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-02-03 06:12:05.111522 | orchestrator | Tuesday 03 February 2026 06:11:27 +0000 (0:00:00.928) 0:16:40.842 ****** 2026-02-03 06:12:05.111543 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:12:05.111550 | orchestrator | 2026-02-03 06:12:05.111556 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-02-03 06:12:05.111563 | orchestrator | Tuesday 03 February 2026 06:11:28 +0000 (0:00:00.923) 0:16:41.766 ****** 2026-02-03 06:12:05.111570 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:12:05.111576 | orchestrator | 2026-02-03 06:12:05.111583 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-02-03 06:12:05.111590 | orchestrator | Tuesday 03 February 2026 06:11:29 +0000 (0:00:00.791) 0:16:42.557 ****** 2026-02-03 06:12:05.111597 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:12:05.111604 | orchestrator | 2026-02-03 06:12:05.111610 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-02-03 06:12:05.111617 | orchestrator | Tuesday 03 February 2026 06:11:30 +0000 (0:00:00.905) 0:16:43.462 ****** 2026-02-03 06:12:05.111624 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:12:05.111630 | orchestrator | 2026-02-03 06:12:05.111637 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-02-03 06:12:05.111644 | orchestrator | Tuesday 03 February 2026 06:11:31 +0000 (0:00:00.818) 0:16:44.280 ****** 2026-02-03 06:12:05.111651 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-03 06:12:05.111658 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-03 
06:12:05.111665 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-03 06:12:05.111671 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:12:05.111678 | orchestrator | 2026-02-03 06:12:05.111703 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-02-03 06:12:05.111710 | orchestrator | Tuesday 03 February 2026 06:11:32 +0000 (0:00:01.161) 0:16:45.442 ****** 2026-02-03 06:12:05.111717 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-0'])  2026-02-03 06:12:05.111724 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-1'])  2026-02-03 06:12:05.111731 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-2'])  2026-02-03 06:12:05.111737 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])  2026-02-03 06:12:05.111744 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])  2026-02-03 06:12:05.111753 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])  2026-02-03 06:12:05.111761 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:12:05.111769 | orchestrator | 2026-02-03 06:12:05.111777 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-02-03 06:12:05.111785 | orchestrator | Tuesday 03 February 2026 06:11:34 +0000 (0:00:01.811) 0:16:47.253 ****** 2026-02-03 06:12:05.111793 | orchestrator | changed: [testbed-node-2] => (item=testbed-node-2) 2026-02-03 06:12:05.111801 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-03 06:12:05.111809 | orchestrator | 2026-02-03 06:12:05.111817 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-02-03 06:12:05.111825 | orchestrator | Tuesday 03 February 2026 06:11:37 +0000 (0:00:03.313) 0:16:50.567 ****** 
2026-02-03 06:12:05.111833 | orchestrator | changed: [testbed-node-2] 2026-02-03 06:12:05.111840 | orchestrator | 2026-02-03 06:12:05.111849 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-03 06:12:05.111856 | orchestrator | Tuesday 03 February 2026 06:11:39 +0000 (0:00:02.274) 0:16:52.842 ****** 2026-02-03 06:12:05.111864 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2 2026-02-03 06:12:05.111872 | orchestrator | 2026-02-03 06:12:05.111880 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-03 06:12:05.111888 | orchestrator | Tuesday 03 February 2026 06:11:40 +0000 (0:00:01.333) 0:16:54.175 ****** 2026-02-03 06:12:05.111897 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2 2026-02-03 06:12:05.111905 | orchestrator | 2026-02-03 06:12:05.111913 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-03 06:12:05.111935 | orchestrator | Tuesday 03 February 2026 06:11:42 +0000 (0:00:01.297) 0:16:55.472 ****** 2026-02-03 06:12:05.111943 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:12:05.111951 | orchestrator | 2026-02-03 06:12:05.111959 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-03 06:12:05.111967 | orchestrator | Tuesday 03 February 2026 06:11:43 +0000 (0:00:01.672) 0:16:57.145 ****** 2026-02-03 06:12:05.111975 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:12:05.111983 | orchestrator | 2026-02-03 06:12:05.111992 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-03 06:12:05.111999 | orchestrator | Tuesday 03 February 2026 06:11:45 +0000 (0:00:01.213) 0:16:58.358 ****** 2026-02-03 06:12:05.112007 | orchestrator | skipping: [testbed-node-2] 2026-02-03 
06:12:05.112016 | orchestrator | 2026-02-03 06:12:05.112024 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-03 06:12:05.112032 | orchestrator | Tuesday 03 February 2026 06:11:46 +0000 (0:00:01.215) 0:16:59.574 ****** 2026-02-03 06:12:05.112040 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:12:05.112048 | orchestrator | 2026-02-03 06:12:05.112057 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-03 06:12:05.112063 | orchestrator | Tuesday 03 February 2026 06:11:47 +0000 (0:00:01.182) 0:17:00.757 ****** 2026-02-03 06:12:05.112070 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:12:05.112077 | orchestrator | 2026-02-03 06:12:05.112089 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-03 06:12:05.112096 | orchestrator | Tuesday 03 February 2026 06:11:49 +0000 (0:00:01.599) 0:17:02.357 ****** 2026-02-03 06:12:05.112102 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:12:05.112109 | orchestrator | 2026-02-03 06:12:05.112116 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-03 06:12:05.112127 | orchestrator | Tuesday 03 February 2026 06:11:50 +0000 (0:00:01.291) 0:17:03.649 ****** 2026-02-03 06:12:05.112134 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:12:05.112140 | orchestrator | 2026-02-03 06:12:05.112147 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-03 06:12:05.112154 | orchestrator | Tuesday 03 February 2026 06:11:51 +0000 (0:00:01.241) 0:17:04.890 ****** 2026-02-03 06:12:05.112160 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:12:05.112167 | orchestrator | 2026-02-03 06:12:05.112174 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-03 06:12:05.112181 | orchestrator | Tuesday 03 February 2026 
06:11:53 +0000 (0:00:01.644) 0:17:06.535 ****** 2026-02-03 06:12:05.112187 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:12:05.112194 | orchestrator | 2026-02-03 06:12:05.112201 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-03 06:12:05.112207 | orchestrator | Tuesday 03 February 2026 06:11:54 +0000 (0:00:01.573) 0:17:08.108 ****** 2026-02-03 06:12:05.112214 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:12:05.112221 | orchestrator | 2026-02-03 06:12:05.112228 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-03 06:12:05.112234 | orchestrator | Tuesday 03 February 2026 06:11:55 +0000 (0:00:00.953) 0:17:09.062 ****** 2026-02-03 06:12:05.112241 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:12:05.112248 | orchestrator | 2026-02-03 06:12:05.112254 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-03 06:12:05.112261 | orchestrator | Tuesday 03 February 2026 06:11:56 +0000 (0:00:00.862) 0:17:09.924 ****** 2026-02-03 06:12:05.112268 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:12:05.112275 | orchestrator | 2026-02-03 06:12:05.112281 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-03 06:12:05.112288 | orchestrator | Tuesday 03 February 2026 06:11:57 +0000 (0:00:00.850) 0:17:10.775 ****** 2026-02-03 06:12:05.112326 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:12:05.112333 | orchestrator | 2026-02-03 06:12:05.112340 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-03 06:12:05.112347 | orchestrator | Tuesday 03 February 2026 06:11:58 +0000 (0:00:00.839) 0:17:11.614 ****** 2026-02-03 06:12:05.112353 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:12:05.112360 | orchestrator | 2026-02-03 06:12:05.112367 | orchestrator | TASK [ceph-handler 
: Set_fact handler_nfs_status] ******************************
2026-02-03 06:12:05.112373 | orchestrator | Tuesday 03 February 2026 06:11:59 +0000 (0:00:00.800) 0:17:12.415 ******
2026-02-03 06:12:05.112380 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:05.112387 | orchestrator |
2026-02-03 06:12:05.112394 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-03 06:12:05.112400 | orchestrator | Tuesday 03 February 2026 06:12:00 +0000 (0:00:00.807) 0:17:13.222 ******
2026-02-03 06:12:05.112407 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:05.112414 | orchestrator |
2026-02-03 06:12:05.112420 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-03 06:12:05.112427 | orchestrator | Tuesday 03 February 2026 06:12:00 +0000 (0:00:00.857) 0:17:14.080 ******
2026-02-03 06:12:05.112434 | orchestrator | ok: [testbed-node-2]
2026-02-03 06:12:05.112441 | orchestrator |
2026-02-03 06:12:05.112447 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-03 06:12:05.112454 | orchestrator | Tuesday 03 February 2026 06:12:01 +0000 (0:00:00.850) 0:17:14.931 ******
2026-02-03 06:12:05.112461 | orchestrator | ok: [testbed-node-2]
2026-02-03 06:12:05.112467 | orchestrator |
2026-02-03 06:12:05.112482 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-03 06:12:05.112488 | orchestrator | Tuesday 03 February 2026 06:12:02 +0000 (0:00:00.820) 0:17:15.751 ******
2026-02-03 06:12:05.112495 | orchestrator | ok: [testbed-node-2]
2026-02-03 06:12:05.112502 | orchestrator |
2026-02-03 06:12:05.112509 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-03 06:12:05.112515 | orchestrator | Tuesday 03 February 2026 06:12:03 +0000 (0:00:00.868) 0:17:16.620 ******
2026-02-03 06:12:05.112522 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:05.112529 | orchestrator |
2026-02-03 06:12:05.112536 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-03 06:12:05.112542 | orchestrator | Tuesday 03 February 2026 06:12:04 +0000 (0:00:00.804) 0:17:17.424 ******
2026-02-03 06:12:05.112549 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:05.112556 | orchestrator |
2026-02-03 06:12:05.112567 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-03 06:12:51.327893 | orchestrator | Tuesday 03 February 2026 06:12:05 +0000 (0:00:00.862) 0:17:18.286 ******
2026-02-03 06:12:51.328011 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:51.328030 | orchestrator |
2026-02-03 06:12:51.328043 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-03 06:12:51.328055 | orchestrator | Tuesday 03 February 2026 06:12:05 +0000 (0:00:00.860) 0:17:19.147 ******
2026-02-03 06:12:51.328066 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:51.328077 | orchestrator |
2026-02-03 06:12:51.328088 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-03 06:12:51.328099 | orchestrator | Tuesday 03 February 2026 06:12:06 +0000 (0:00:00.796) 0:17:19.943 ******
2026-02-03 06:12:51.328110 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:51.328121 | orchestrator |
2026-02-03 06:12:51.328133 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-03 06:12:51.328144 | orchestrator | Tuesday 03 February 2026 06:12:07 +0000 (0:00:00.818) 0:17:20.762 ******
2026-02-03 06:12:51.328155 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:51.328165 | orchestrator |
2026-02-03 06:12:51.328176 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-03 06:12:51.328187 | orchestrator | Tuesday 03 February 2026 06:12:08 +0000 (0:00:00.847) 0:17:21.610 ******
2026-02-03 06:12:51.328198 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:51.328209 | orchestrator |
2026-02-03 06:12:51.328219 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-03 06:12:51.328231 | orchestrator | Tuesday 03 February 2026 06:12:09 +0000 (0:00:00.813) 0:17:22.424 ******
2026-02-03 06:12:51.328242 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:51.328253 | orchestrator |
2026-02-03 06:12:51.328281 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-03 06:12:51.328292 | orchestrator | Tuesday 03 February 2026 06:12:10 +0000 (0:00:00.785) 0:17:23.210 ******
2026-02-03 06:12:51.328329 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:51.328340 | orchestrator |
2026-02-03 06:12:51.328351 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-03 06:12:51.328362 | orchestrator | Tuesday 03 February 2026 06:12:10 +0000 (0:00:00.864) 0:17:24.074 ******
2026-02-03 06:12:51.328373 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:51.328384 | orchestrator |
2026-02-03 06:12:51.328396 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-03 06:12:51.328471 | orchestrator | Tuesday 03 February 2026 06:12:11 +0000 (0:00:00.815) 0:17:24.890 ******
2026-02-03 06:12:51.328484 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:51.328496 | orchestrator |
2026-02-03 06:12:51.328509 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-03 06:12:51.328522 | orchestrator | Tuesday 03 February 2026 06:12:12 +0000 (0:00:00.818) 0:17:25.709 ******
2026-02-03 06:12:51.328534 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:51.328571 | orchestrator |
2026-02-03 06:12:51.328584 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-03 06:12:51.328598 | orchestrator | Tuesday 03 February 2026 06:12:13 +0000 (0:00:00.800) 0:17:26.509 ******
2026-02-03 06:12:51.328610 | orchestrator | ok: [testbed-node-2]
2026-02-03 06:12:51.328624 | orchestrator |
2026-02-03 06:12:51.328636 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-03 06:12:51.328648 | orchestrator | Tuesday 03 February 2026 06:12:15 +0000 (0:00:01.748) 0:17:28.257 ******
2026-02-03 06:12:51.328661 | orchestrator | ok: [testbed-node-2]
2026-02-03 06:12:51.328673 | orchestrator |
2026-02-03 06:12:51.328686 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-03 06:12:51.328698 | orchestrator | Tuesday 03 February 2026 06:12:17 +0000 (0:00:02.251) 0:17:30.509 ******
2026-02-03 06:12:51.328711 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2
2026-02-03 06:12:51.328725 | orchestrator |
2026-02-03 06:12:51.328818 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-03 06:12:51.328834 | orchestrator | Tuesday 03 February 2026 06:12:18 +0000 (0:00:01.378) 0:17:31.887 ******
2026-02-03 06:12:51.328845 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:51.328856 | orchestrator |
2026-02-03 06:12:51.328867 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-03 06:12:51.328878 | orchestrator | Tuesday 03 February 2026 06:12:19 +0000 (0:00:01.158) 0:17:33.045 ******
2026-02-03 06:12:51.328888 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:51.328899 | orchestrator |
2026-02-03 06:12:51.328910 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-03 06:12:51.328921 | orchestrator | Tuesday 03 February 2026 06:12:21 +0000 (0:00:01.180) 0:17:34.225 ******
2026-02-03 06:12:51.328931 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-03 06:12:51.328942 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-03 06:12:51.328953 | orchestrator |
2026-02-03 06:12:51.328964 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-03 06:12:51.328975 | orchestrator | Tuesday 03 February 2026 06:12:23 +0000 (0:00:01.996) 0:17:36.222 ******
2026-02-03 06:12:51.328985 | orchestrator | ok: [testbed-node-2]
2026-02-03 06:12:51.328996 | orchestrator |
2026-02-03 06:12:51.329007 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-03 06:12:51.329017 | orchestrator | Tuesday 03 February 2026 06:12:24 +0000 (0:00:01.602) 0:17:37.825 ******
2026-02-03 06:12:51.329028 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:51.329039 | orchestrator |
2026-02-03 06:12:51.329049 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-03 06:12:51.329060 | orchestrator | Tuesday 03 February 2026 06:12:25 +0000 (0:00:01.208) 0:17:39.033 ******
2026-02-03 06:12:51.329071 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:51.329081 | orchestrator |
2026-02-03 06:12:51.329092 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-03 06:12:51.329122 | orchestrator | Tuesday 03 February 2026 06:12:26 +0000 (0:00:00.805) 0:17:39.839 ******
2026-02-03 06:12:51.329133 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:51.329144 | orchestrator |
2026-02-03 06:12:51.329155 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-03 06:12:51.329166 | orchestrator | Tuesday 03 February 2026 06:12:27 +0000 (0:00:00.853) 0:17:40.693 ******
2026-02-03 06:12:51.329176 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2
2026-02-03 06:12:51.329187 | orchestrator |
2026-02-03 06:12:51.329198 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-03 06:12:51.329208 | orchestrator | Tuesday 03 February 2026 06:12:28 +0000 (0:00:01.200) 0:17:41.894 ******
2026-02-03 06:12:51.329219 | orchestrator | ok: [testbed-node-2]
2026-02-03 06:12:51.329230 | orchestrator |
2026-02-03 06:12:51.329250 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-03 06:12:51.329261 | orchestrator | Tuesday 03 February 2026 06:12:30 +0000 (0:00:01.814) 0:17:43.708 ******
2026-02-03 06:12:51.329272 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2) 
2026-02-03 06:12:51.329283 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2) 
2026-02-03 06:12:51.329310 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4) 
2026-02-03 06:12:51.329321 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:51.329332 | orchestrator |
2026-02-03 06:12:51.329343 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-03 06:12:51.329354 | orchestrator | Tuesday 03 February 2026 06:12:31 +0000 (0:00:01.263) 0:17:44.971 ******
2026-02-03 06:12:51.329365 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:51.329448 | orchestrator |
2026-02-03 06:12:51.329470 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-03 06:12:51.329481 | orchestrator | Tuesday 03 February 2026 06:12:32 +0000 (0:00:01.162) 0:17:46.134 ******
2026-02-03 06:12:51.329492 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:51.329503 | orchestrator |
2026-02-03 06:12:51.329514 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-03 06:12:51.329524 | orchestrator | Tuesday 03 February 2026 06:12:34 +0000 (0:00:01.771) 0:17:47.905 ******
2026-02-03 06:12:51.329535 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:51.329546 | orchestrator |
2026-02-03 06:12:51.329556 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-03 06:12:51.329567 | orchestrator | Tuesday 03 February 2026 06:12:36 +0000 (0:00:01.336) 0:17:49.241 ******
2026-02-03 06:12:51.329578 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:51.329589 | orchestrator |
2026-02-03 06:12:51.329600 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-03 06:12:51.329611 | orchestrator | Tuesday 03 February 2026 06:12:37 +0000 (0:00:01.264) 0:17:50.506 ******
2026-02-03 06:12:51.329621 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:51.329632 | orchestrator |
2026-02-03 06:12:51.329643 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-03 06:12:51.329654 | orchestrator | Tuesday 03 February 2026 06:12:38 +0000 (0:00:00.907) 0:17:51.414 ******
2026-02-03 06:12:51.329664 | orchestrator | ok: [testbed-node-2]
2026-02-03 06:12:51.329675 | orchestrator |
2026-02-03 06:12:51.329686 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-03 06:12:51.329696 | orchestrator | Tuesday 03 February 2026 06:12:40 +0000 (0:00:02.327) 0:17:53.741 ******
2026-02-03 06:12:51.329707 | orchestrator | ok: [testbed-node-2]
2026-02-03 06:12:51.329718 | orchestrator |
2026-02-03 06:12:51.329729 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-03 06:12:51.329739 | orchestrator | Tuesday 03 February 2026 06:12:41 +0000 (0:00:00.823) 0:17:54.565 ******
2026-02-03 06:12:51.329750 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2
2026-02-03 06:12:51.329761 | orchestrator |
2026-02-03 06:12:51.329771 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-03 06:12:51.329782 | orchestrator | Tuesday 03 February 2026 06:12:42 +0000 (0:00:01.173) 0:17:55.738 ******
2026-02-03 06:12:51.329793 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:51.329803 | orchestrator |
2026-02-03 06:12:51.329814 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-03 06:12:51.329825 | orchestrator | Tuesday 03 February 2026 06:12:43 +0000 (0:00:01.153) 0:17:56.892 ******
2026-02-03 06:12:51.329836 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:51.329846 | orchestrator |
2026-02-03 06:12:51.329922 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-03 06:12:51.329935 | orchestrator | Tuesday 03 February 2026 06:12:44 +0000 (0:00:01.268) 0:17:58.161 ******
2026-02-03 06:12:51.329955 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:51.329966 | orchestrator |
2026-02-03 06:12:51.329977 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-03 06:12:51.329988 | orchestrator | Tuesday 03 February 2026 06:12:46 +0000 (0:00:01.316) 0:17:59.478 ******
2026-02-03 06:12:51.329998 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:51.330009 | orchestrator |
2026-02-03 06:12:51.330094 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-03 06:12:51.330115 | orchestrator | Tuesday 03 February 2026 06:12:47 +0000 (0:00:01.258) 0:18:00.736 ******
2026-02-03 06:12:51.330134 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:51.330155 | orchestrator |
2026-02-03 06:12:51.330176 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-03 06:12:51.330196 | orchestrator | Tuesday 03 February 2026 06:12:48 +0000 (0:00:01.196) 0:18:01.933 ******
2026-02-03 06:12:51.330210 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:51.330220 | orchestrator |
2026-02-03 06:12:51.330231 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-03 06:12:51.330242 | orchestrator | Tuesday 03 February 2026 06:12:50 +0000 (0:00:01.349) 0:18:03.282 ******
2026-02-03 06:12:51.330253 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:12:51.330263 | orchestrator |
2026-02-03 06:12:51.330285 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-03 06:13:27.279274 | orchestrator | Tuesday 03 February 2026 06:12:51 +0000 (0:00:01.217) 0:18:04.500 ******
2026-02-03 06:13:27.279413 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:13:27.279426 | orchestrator |
2026-02-03 06:13:27.279435 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-03 06:13:27.279442 | orchestrator | Tuesday 03 February 2026 06:12:52 +0000 (0:00:01.224) 0:18:05.724 ******
2026-02-03 06:13:27.279450 | orchestrator | ok: [testbed-node-2]
2026-02-03 06:13:27.279459 | orchestrator |
2026-02-03 06:13:27.279466 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-03 06:13:27.279474 | orchestrator | Tuesday 03 February 2026 06:12:53 +0000 (0:00:00.827) 0:18:06.552 ******
2026-02-03 06:13:27.279482 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2
2026-02-03 06:13:27.279490 | orchestrator |
2026-02-03 06:13:27.279498 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-03 06:13:27.279505 | orchestrator | Tuesday 03 February 2026 06:12:54 +0000 (0:00:01.179) 0:18:07.732 ******
2026-02-03 06:13:27.279512 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph)
2026-02-03 06:13:27.279520 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/)
2026-02-03 06:13:27.279527 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-02-03 06:13:27.279534 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-02-03 06:13:27.279541 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-02-03 06:13:27.279548 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-02-03 06:13:27.279555 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-02-03 06:13:27.279577 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-02-03 06:13:27.279585 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-03 06:13:27.279592 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-03 06:13:27.279599 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-03 06:13:27.279606 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-03 06:13:27.279613 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-03 06:13:27.279620 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-03 06:13:27.279627 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph)
2026-02-03 06:13:27.279634 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph)
2026-02-03 06:13:27.279641 | orchestrator |
2026-02-03 06:13:27.279666 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-03 06:13:27.279674 | orchestrator | Tuesday 03 February 2026 06:13:01 +0000 (0:00:07.069) 0:18:14.801 ******
2026-02-03 06:13:27.279681 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:13:27.279688 | orchestrator |
2026-02-03 06:13:27.279695 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-03 06:13:27.279702 | orchestrator | Tuesday 03 February 2026 06:13:02 +0000 (0:00:00.836) 0:18:15.638 ******
2026-02-03 06:13:27.279708 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:13:27.279715 | orchestrator |
2026-02-03 06:13:27.279721 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-03 06:13:27.279728 | orchestrator | Tuesday 03 February 2026 06:13:03 +0000 (0:00:00.808) 0:18:16.447 ******
2026-02-03 06:13:27.279734 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:13:27.279740 | orchestrator |
2026-02-03 06:13:27.279747 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-03 06:13:27.279753 | orchestrator | Tuesday 03 February 2026 06:13:04 +0000 (0:00:00.791) 0:18:17.238 ******
2026-02-03 06:13:27.279760 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:13:27.279766 | orchestrator |
2026-02-03 06:13:27.279773 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-03 06:13:27.279779 | orchestrator | Tuesday 03 February 2026 06:13:04 +0000 (0:00:00.829) 0:18:18.068 ******
2026-02-03 06:13:27.279785 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:13:27.279792 | orchestrator |
2026-02-03 06:13:27.279799 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-03 06:13:27.279805 | orchestrator | Tuesday 03 February 2026 06:13:05 +0000 (0:00:00.919) 0:18:18.988 ******
2026-02-03 06:13:27.279812 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:13:27.279819 | orchestrator |
2026-02-03 06:13:27.279826 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-03 06:13:27.279834 | orchestrator | Tuesday 03 February 2026 06:13:06 +0000 (0:00:00.868) 0:18:19.857 ******
2026-02-03 06:13:27.279840 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:13:27.279848 | orchestrator |
2026-02-03 06:13:27.279855 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-03 06:13:27.279862 | orchestrator | Tuesday 03 February 2026 06:13:07 +0000 (0:00:00.831) 0:18:20.689 ******
2026-02-03 06:13:27.279869 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:13:27.279875 | orchestrator |
2026-02-03 06:13:27.279883 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-03 06:13:27.279890 | orchestrator | Tuesday 03 February 2026 06:13:08 +0000 (0:00:01.036) 0:18:21.725 ******
2026-02-03 06:13:27.279897 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:13:27.279903 | orchestrator |
2026-02-03 06:13:27.279910 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-03 06:13:27.279918 | orchestrator | Tuesday 03 February 2026 06:13:09 +0000 (0:00:00.829) 0:18:22.555 ******
2026-02-03 06:13:27.279925 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:13:27.279931 | orchestrator |
2026-02-03 06:13:27.279938 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-03 06:13:27.279946 | orchestrator | Tuesday 03 February 2026 06:13:10 +0000 (0:00:00.825) 0:18:23.380 ******
2026-02-03 06:13:27.279953 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:13:27.279960 | orchestrator |
2026-02-03 06:13:27.279985 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-03 06:13:27.279993 | orchestrator | Tuesday 03 February 2026 06:13:11 +0000 (0:00:00.858) 0:18:24.239 ******
2026-02-03 06:13:27.280000 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:13:27.280007 | orchestrator |
2026-02-03 06:13:27.280014 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-03 06:13:27.280021 | orchestrator | Tuesday 03 February 2026 06:13:11 +0000 (0:00:00.786) 0:18:25.025 ******
2026-02-03 06:13:27.280034 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:13:27.280041 | orchestrator |
2026-02-03 06:13:27.280047 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-03 06:13:27.280053 | orchestrator | Tuesday 03 February 2026 06:13:12 +0000 (0:00:00.933) 0:18:25.958 ******
2026-02-03 06:13:27.280060 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:13:27.280066 | orchestrator |
2026-02-03 06:13:27.280072 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-03 06:13:27.280079 | orchestrator | Tuesday 03 February 2026 06:13:13 +0000 (0:00:00.821) 0:18:26.780 ******
2026-02-03 06:13:27.280085 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:13:27.280091 | orchestrator |
2026-02-03 06:13:27.280098 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-03 06:13:27.280104 | orchestrator | Tuesday 03 February 2026 06:13:14 +0000 (0:00:00.952) 0:18:27.733 ******
2026-02-03 06:13:27.280111 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:13:27.280118 | orchestrator |
2026-02-03 06:13:27.280124 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-03 06:13:27.280131 | orchestrator | Tuesday 03 February 2026 06:13:15 +0000 (0:00:00.907) 0:18:28.640 ******
2026-02-03 06:13:27.280142 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:13:27.280149 | orchestrator |
2026-02-03 06:13:27.280156 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-03 06:13:27.280163 | orchestrator | Tuesday 03 February 2026 06:13:16 +0000 (0:00:00.815) 0:18:29.456 ******
2026-02-03 06:13:27.280170 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:13:27.280176 | orchestrator |
2026-02-03 06:13:27.280183 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-03 06:13:27.280189 | orchestrator | Tuesday 03 February 2026 06:13:17 +0000 (0:00:00.821) 0:18:30.277 ******
2026-02-03 06:13:27.280195 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:13:27.280202 | orchestrator |
2026-02-03 06:13:27.280208 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-03 06:13:27.280215 | orchestrator | Tuesday 03 February 2026 06:13:18 +0000 (0:00:00.961) 0:18:31.238 ******
2026-02-03 06:13:27.280221 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:13:27.280228 | orchestrator |
2026-02-03 06:13:27.280234 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-03 06:13:27.280241 | orchestrator | Tuesday 03 February 2026 06:13:18 +0000 (0:00:00.792) 0:18:32.030 ******
2026-02-03 06:13:27.280248 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:13:27.280254 | orchestrator |
2026-02-03 06:13:27.280261 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-03 06:13:27.280267 | orchestrator | Tuesday 03 February 2026 06:13:19 +0000 (0:00:00.822) 0:18:32.853 ******
2026-02-03 06:13:27.280273 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3) 
2026-02-03 06:13:27.280280 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4) 
2026-02-03 06:13:27.280286 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5) 
2026-02-03 06:13:27.280293 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:13:27.280326 | orchestrator |
2026-02-03 06:13:27.280332 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-03 06:13:27.280338 | orchestrator | Tuesday 03 February 2026 06:13:20 +0000 (0:00:01.188) 0:18:34.042 ******
2026-02-03 06:13:27.280344 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3) 
2026-02-03 06:13:27.280350 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4) 
2026-02-03 06:13:27.280356 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5) 
2026-02-03 06:13:27.280363 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:13:27.280368 | orchestrator |
2026-02-03 06:13:27.280374 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-03 06:13:27.280380 | orchestrator | Tuesday 03 February 2026 06:13:21 +0000 (0:00:01.126) 0:18:35.168 ******
2026-02-03 06:13:27.280391 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3) 
2026-02-03 06:13:27.280397 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4) 
2026-02-03 06:13:27.280403 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5) 
2026-02-03 06:13:27.280409 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:13:27.280416 | orchestrator |
2026-02-03 06:13:27.280422 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-03 06:13:27.280429 | orchestrator | Tuesday 03 February 2026 06:13:23 +0000 (0:00:01.146) 0:18:36.315 ******
2026-02-03 06:13:27.280436 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:13:27.280442 | orchestrator |
2026-02-03 06:13:27.280448 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-03 06:13:27.280454 | orchestrator | Tuesday 03 February 2026 06:13:23 +0000 (0:00:00.798) 0:18:37.113 ******
2026-02-03 06:13:27.280462 | orchestrator | skipping: [testbed-node-2] => (item=0) 
2026-02-03 06:13:27.280468 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:13:27.280475 | orchestrator |
2026-02-03 06:13:27.280482 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-03 06:13:27.280489 | orchestrator | Tuesday 03 February 2026 06:13:24 +0000 (0:00:00.953) 0:18:38.067 ******
2026-02-03 06:13:27.280495 | orchestrator | ok: [testbed-node-2]
2026-02-03 06:13:27.280502 | orchestrator |
2026-02-03 06:13:27.280508 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-02-03 06:13:27.280514 | orchestrator | Tuesday 03 February 2026 06:13:26 +0000 (0:00:01.497) 0:18:39.565 ******
2026-02-03 06:13:27.280520 | orchestrator | ok: [testbed-node-2]
2026-02-03 06:13:27.280525 | orchestrator |
2026-02-03 06:13:27.280538 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-02-03 06:14:53.204493 | orchestrator | Tuesday 03 February 2026 06:13:27 +0000 (0:00:00.886) 0:18:40.451 ******
2026-02-03 06:14:53.204611 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-2
2026-02-03 06:14:53.204630 | orchestrator |
2026-02-03 06:14:53.204644 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-02-03 06:14:53.204655 | orchestrator | Tuesday 03 February 2026 06:13:28 +0000 (0:00:01.420) 0:18:41.872 ******
2026-02-03 06:14:53.204666 | orchestrator | ok: [testbed-node-2]
2026-02-03 06:14:53.204679 | orchestrator |
2026-02-03 06:14:53.204691 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-02-03 06:14:53.204703 | orchestrator | Tuesday 03 February 2026 06:13:32 +0000 (0:00:03.377) 0:18:45.249 ******
2026-02-03 06:14:53.204714 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:14:53.204726 | orchestrator |
2026-02-03 06:14:53.204738 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-02-03 06:14:53.204749 | orchestrator | Tuesday 03 February 2026 06:13:33 +0000 (0:00:01.334) 0:18:46.584 ******
2026-02-03 06:14:53.204760 | orchestrator | ok: [testbed-node-2]
2026-02-03 06:14:53.204771 | orchestrator |
2026-02-03 06:14:53.204782 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-02-03 06:14:53.204793 | orchestrator | Tuesday 03 February 2026 06:13:34 +0000 (0:00:01.218) 0:18:47.802 ******
2026-02-03 06:14:53.204804 | orchestrator | ok: [testbed-node-2]
2026-02-03 06:14:53.204816 | orchestrator |
2026-02-03 06:14:53.204827 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-02-03 06:14:53.204838 | orchestrator | Tuesday 03 February 2026 06:13:35 +0000 (0:00:01.236) 0:18:49.039 ******
2026-02-03 06:14:53.204867 | orchestrator | changed: [testbed-node-2]
2026-02-03 06:14:53.204879 | orchestrator |
2026-02-03 06:14:53.204890 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-02-03 06:14:53.204906 | orchestrator | Tuesday 03 February 2026 06:13:37 +0000 (0:00:02.140) 0:18:51.180 ******
2026-02-03 06:14:53.204924 | orchestrator | ok: [testbed-node-2]
2026-02-03 06:14:53.204943 | orchestrator |
2026-02-03 06:14:53.204963 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-02-03 06:14:53.205012 | orchestrator | Tuesday 03 February 2026 06:13:39 +0000 (0:00:01.631) 0:18:52.812 ******
2026-02-03 06:14:53.205033 | orchestrator | ok: [testbed-node-2]
2026-02-03 06:14:53.205052 | orchestrator |
2026-02-03 06:14:53.205071 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-02-03 06:14:53.205090 | orchestrator | Tuesday 03 February 2026 06:13:41 +0000 (0:00:01.559) 0:18:54.371 ******
2026-02-03 06:14:53.205109 | orchestrator | ok: [testbed-node-2]
2026-02-03 06:14:53.205129 | orchestrator |
2026-02-03 06:14:53.205150 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-02-03 06:14:53.205171 | orchestrator | Tuesday 03 February 2026 06:13:42 +0000 (0:00:01.579) 0:18:55.950 ******
2026-02-03 06:14:53.205192 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-03 06:14:53.205212 | orchestrator |
2026-02-03 06:14:53.205226 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-02-03 06:14:53.205240 | orchestrator | Tuesday 03 February 2026 06:13:44 +0000 (0:00:01.678) 0:18:57.629 ******
2026-02-03 06:14:53.205252 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-03 06:14:53.205265 | orchestrator |
2026-02-03 06:14:53.205278 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-02-03 06:14:53.205291 | orchestrator | Tuesday 03 February 2026 06:13:46 +0000 (0:00:01.708) 0:18:59.337 ******
2026-02-03 06:14:53.205304 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-03 06:14:53.205351 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-03 06:14:53.205365 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-02-03 06:14:53.205376 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-02-03 06:14:53.205387 | orchestrator |
2026-02-03 06:14:53.205398 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-02-03 06:14:53.205409 | orchestrator | Tuesday 03 February 2026 06:13:50 +0000 (0:00:04.055) 0:19:03.393 ******
2026-02-03 06:14:53.205420 | orchestrator | changed: [testbed-node-2]
2026-02-03 06:14:53.205430 | orchestrator |
2026-02-03 06:14:53.205441 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-02-03 06:14:53.205452 | orchestrator | Tuesday 03 February 2026 06:13:52 +0000 (0:00:02.231) 0:19:05.624 ******
2026-02-03 06:14:53.205463 | orchestrator | ok: [testbed-node-2]
2026-02-03 06:14:53.205473 | orchestrator |
2026-02-03 06:14:53.205484 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-02-03 06:14:53.205495 | orchestrator | Tuesday 03 February 2026 06:13:53 +0000 (0:00:01.204) 0:19:06.829 ******
2026-02-03 06:14:53.205506 | orchestrator | ok: [testbed-node-2]
2026-02-03 06:14:53.205516 | orchestrator |
2026-02-03 06:14:53.205527 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-02-03 06:14:53.205538 | orchestrator | Tuesday 03 February 2026 06:13:54 +0000 (0:00:01.262) 0:19:08.091 ******
2026-02-03 06:14:53.205548 | orchestrator | ok: [testbed-node-2]
2026-02-03 06:14:53.205559 | orchestrator |
2026-02-03 06:14:53.205570 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-02-03 06:14:53.205581 | orchestrator | Tuesday 03 February 2026 06:13:56 +0000 (0:00:02.040) 0:19:10.132 ******
2026-02-03 06:14:53.205591 | orchestrator | ok: [testbed-node-2]
2026-02-03 06:14:53.205602 | orchestrator |
2026-02-03 06:14:53.205613 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-02-03 06:14:53.205623 | orchestrator | Tuesday 03 February 2026 06:13:58 +0000 (0:00:01.602) 0:19:11.735 ******
2026-02-03 06:14:53.205634 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:14:53.205645 | orchestrator |
2026-02-03 06:14:53.205656 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-02-03 06:14:53.205666 | orchestrator | Tuesday 03 February 2026 06:13:59 +0000 (0:00:00.965) 0:19:12.700 ******
2026-02-03 06:14:53.205677 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-2
2026-02-03 06:14:53.205688 | orchestrator |
2026-02-03 06:14:53.205719 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-02-03 06:14:53.205742 | orchestrator | Tuesday 03 February 2026 06:14:00 +0000 (0:00:01.214) 0:19:13.915 ******
2026-02-03 06:14:53.205753 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:14:53.205764 | orchestrator |
2026-02-03 06:14:53.205775 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-02-03 06:14:53.205786 | orchestrator | Tuesday 03 February 2026 06:14:01 +0000 (0:00:01.249) 0:19:15.165 ******
2026-02-03 06:14:53.205797 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:14:53.205808 | orchestrator |
2026-02-03 06:14:53.205818 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-02-03 06:14:53.205829 | orchestrator | Tuesday 03 February 2026 06:14:03 +0000 (0:00:01.180) 0:19:16.345 ******
2026-02-03 06:14:53.205840 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-2
2026-02-03 06:14:53.205851 | orchestrator |
2026-02-03 06:14:53.205862 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-02-03 06:14:53.205873 | orchestrator | Tuesday 03 February 2026 06:14:04 +0000 (0:00:01.156) 0:19:17.501 ******
2026-02-03 06:14:53.205884 | orchestrator | ok: [testbed-node-2]
2026-02-03 06:14:53.205895 | orchestrator |
2026-02-03 06:14:53.205906 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-02-03 06:14:53.205917 | orchestrator | Tuesday 03 February 2026 06:14:07 +0000 (0:00:02.790) 0:19:20.291 ******
2026-02-03 06:14:53.205927 | orchestrator | ok: [testbed-node-2]
2026-02-03 06:14:53.205938 | orchestrator |
2026-02-03 06:14:53.205949 | orchestrator | TASK [ceph-mon : Enable
ceph-mon.target] *************************************** 2026-02-03 06:14:53.205968 | orchestrator | Tuesday 03 February 2026 06:14:09 +0000 (0:00:02.010) 0:19:22.302 ****** 2026-02-03 06:14:53.205980 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:14:53.206000 | orchestrator | 2026-02-03 06:14:53.206088 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-03 06:14:53.206109 | orchestrator | Tuesday 03 February 2026 06:14:11 +0000 (0:00:02.566) 0:19:24.868 ****** 2026-02-03 06:14:53.206129 | orchestrator | changed: [testbed-node-2] 2026-02-03 06:14:53.206148 | orchestrator | 2026-02-03 06:14:53.206166 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-03 06:14:53.206185 | orchestrator | Tuesday 03 February 2026 06:14:14 +0000 (0:00:03.261) 0:19:28.129 ****** 2026-02-03 06:14:53.206204 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-2 2026-02-03 06:14:53.206218 | orchestrator | 2026-02-03 06:14:53.206229 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-03 06:14:53.206240 | orchestrator | Tuesday 03 February 2026 06:14:16 +0000 (0:00:01.334) 0:19:29.464 ****** 2026-02-03 06:14:53.206250 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-02-03 06:14:53.206262 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:14:53.206273 | orchestrator | 2026-02-03 06:14:53.206283 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-03 06:14:53.206294 | orchestrator | Tuesday 03 February 2026 06:14:39 +0000 (0:00:23.221) 0:19:52.685 ****** 2026-02-03 06:14:53.206305 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:14:53.206337 | orchestrator | 2026-02-03 06:14:53.206349 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-03 06:14:53.206360 | orchestrator | Tuesday 03 February 2026 06:14:42 +0000 (0:00:02.898) 0:19:55.584 ****** 2026-02-03 06:14:53.206371 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:14:53.206382 | orchestrator | 2026-02-03 06:14:53.206393 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-03 06:14:53.206404 | orchestrator | Tuesday 03 February 2026 06:14:43 +0000 (0:00:00.798) 0:19:56.383 ****** 2026-02-03 06:14:53.206417 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66d9d5c0a411652b15952a056b02e5f2a47ac31f'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-03 06:14:53.206441 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66d9d5c0a411652b15952a056b02e5f2a47ac31f'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-03 06:14:53.206453 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66d9d5c0a411652b15952a056b02e5f2a47ac31f'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-03 06:14:53.206464 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66d9d5c0a411652b15952a056b02e5f2a47ac31f'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-03 06:14:53.206488 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66d9d5c0a411652b15952a056b02e5f2a47ac31f'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-03 06:15:40.066614 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__66d9d5c0a411652b15952a056b02e5f2a47ac31f'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__66d9d5c0a411652b15952a056b02e5f2a47ac31f'}])  2026-02-03 06:15:40.066728 | orchestrator | 2026-02-03 06:15:40.066747 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-03 06:15:40.066760 | orchestrator | Tuesday 03 February 2026 06:14:53 +0000 (0:00:09.995) 0:20:06.378 ****** 2026-02-03 06:15:40.066772 | orchestrator | changed: [testbed-node-2] 2026-02-03 06:15:40.066784 | orchestrator | 
2026-02-03 06:15:40.066795 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-03 06:15:40.066806 | orchestrator | Tuesday 03 February 2026 06:14:55 +0000 (0:00:02.298) 0:20:08.677 ****** 2026-02-03 06:15:40.066834 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:15:40.066846 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-02-03 06:15:40.066857 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-02-03 06:15:40.066868 | orchestrator | 2026-02-03 06:15:40.066879 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-03 06:15:40.066890 | orchestrator | Tuesday 03 February 2026 06:14:57 +0000 (0:00:01.999) 0:20:10.676 ****** 2026-02-03 06:15:40.066902 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-03 06:15:40.066913 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-03 06:15:40.066924 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-03 06:15:40.066935 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:15:40.066946 | orchestrator | 2026-02-03 06:15:40.066956 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-03 06:15:40.066968 | orchestrator | Tuesday 03 February 2026 06:14:58 +0000 (0:00:01.137) 0:20:11.813 ****** 2026-02-03 06:15:40.066978 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:15:40.067016 | orchestrator | 2026-02-03 06:15:40.067028 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-02-03 06:15:40.067039 | orchestrator | Tuesday 03 February 2026 06:14:59 +0000 (0:00:00.832) 0:20:12.646 ****** 2026-02-03 06:15:40.067050 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:15:40.067062 | orchestrator | 2026-02-03 06:15:40.067073 | orchestrator | PLAY [Reset mon_host] ********************************************************** 2026-02-03 06:15:40.067083 | orchestrator | 2026-02-03 06:15:40.067094 | orchestrator | TASK [Reset mon_host fact] ***************************************************** 2026-02-03 06:15:40.067105 | orchestrator | Tuesday 03 February 2026 06:15:03 +0000 (0:00:03.594) 0:20:16.241 ****** 2026-02-03 06:15:40.067116 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:15:40.067127 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:15:40.067139 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:15:40.067153 | orchestrator | 2026-02-03 06:15:40.067166 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-02-03 06:15:40.067178 | orchestrator | 2026-02-03 06:15:40.067191 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-03 06:15:40.067203 | orchestrator | Tuesday 03 February 2026 06:15:05 +0000 (0:00:02.004) 0:20:18.246 ****** 2026-02-03 06:15:40.067216 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:15:40.067230 | orchestrator | 2026-02-03 06:15:40.067243 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-03 06:15:40.067256 | orchestrator | Tuesday 03 February 2026 06:15:06 +0000 (0:00:01.251) 0:20:19.497 ****** 2026-02-03 06:15:40.067268 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:15:40.067281 | orchestrator | 2026-02-03 06:15:40.067321 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-03 06:15:40.067334 | orchestrator | Tuesday 03 February 2026 06:15:07 +0000 (0:00:01.241) 0:20:20.739 
****** 2026-02-03 06:15:40.067347 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:15:40.067360 | orchestrator | 2026-02-03 06:15:40.067372 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-03 06:15:40.067384 | orchestrator | Tuesday 03 February 2026 06:15:08 +0000 (0:00:01.169) 0:20:21.908 ****** 2026-02-03 06:15:40.067397 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:15:40.067410 | orchestrator | 2026-02-03 06:15:40.067423 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-03 06:15:40.067436 | orchestrator | Tuesday 03 February 2026 06:15:09 +0000 (0:00:01.172) 0:20:23.081 ****** 2026-02-03 06:15:40.067448 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:15:40.067460 | orchestrator | 2026-02-03 06:15:40.067473 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-03 06:15:40.067486 | orchestrator | Tuesday 03 February 2026 06:15:11 +0000 (0:00:01.273) 0:20:24.355 ****** 2026-02-03 06:15:40.067498 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:15:40.067509 | orchestrator | 2026-02-03 06:15:40.067519 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-03 06:15:40.067530 | orchestrator | Tuesday 03 February 2026 06:15:12 +0000 (0:00:01.191) 0:20:25.546 ****** 2026-02-03 06:15:40.067541 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:15:40.067551 | orchestrator | 2026-02-03 06:15:40.067562 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-03 06:15:40.067573 | orchestrator | Tuesday 03 February 2026 06:15:13 +0000 (0:00:01.173) 0:20:26.720 ****** 2026-02-03 06:15:40.067584 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:15:40.067594 | orchestrator | 2026-02-03 06:15:40.067605 | orchestrator | TASK [ceph-handler : Set_fact 
handler_rbd_status] ****************************** 2026-02-03 06:15:40.067616 | orchestrator | Tuesday 03 February 2026 06:15:14 +0000 (0:00:01.187) 0:20:27.908 ****** 2026-02-03 06:15:40.067644 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:15:40.067655 | orchestrator | 2026-02-03 06:15:40.067666 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-03 06:15:40.067677 | orchestrator | Tuesday 03 February 2026 06:15:15 +0000 (0:00:01.246) 0:20:29.154 ****** 2026-02-03 06:15:40.067696 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:15:40.067708 | orchestrator | 2026-02-03 06:15:40.067719 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-03 06:15:40.067730 | orchestrator | Tuesday 03 February 2026 06:15:17 +0000 (0:00:01.268) 0:20:30.422 ****** 2026-02-03 06:15:40.067741 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:15:40.067752 | orchestrator | 2026-02-03 06:15:40.067763 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-03 06:15:40.067773 | orchestrator | Tuesday 03 February 2026 06:15:18 +0000 (0:00:01.182) 0:20:31.605 ****** 2026-02-03 06:15:40.067784 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:15:40.067795 | orchestrator | 2026-02-03 06:15:40.067806 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-03 06:15:40.067816 | orchestrator | Tuesday 03 February 2026 06:15:19 +0000 (0:00:01.228) 0:20:32.834 ****** 2026-02-03 06:15:40.067827 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:15:40.067838 | orchestrator | 2026-02-03 06:15:40.067854 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-03 06:15:40.067866 | orchestrator | Tuesday 03 February 2026 06:15:20 +0000 (0:00:01.258) 0:20:34.092 ****** 2026-02-03 06:15:40.067877 | orchestrator | 
skipping: [testbed-node-0] 2026-02-03 06:15:40.067887 | orchestrator | 2026-02-03 06:15:40.067898 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-03 06:15:40.067909 | orchestrator | Tuesday 03 February 2026 06:15:22 +0000 (0:00:01.189) 0:20:35.281 ****** 2026-02-03 06:15:40.067920 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:15:40.067931 | orchestrator | 2026-02-03 06:15:40.067941 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-03 06:15:40.067952 | orchestrator | Tuesday 03 February 2026 06:15:23 +0000 (0:00:01.158) 0:20:36.440 ****** 2026-02-03 06:15:40.067963 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:15:40.067973 | orchestrator | 2026-02-03 06:15:40.067984 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-03 06:15:40.068010 | orchestrator | Tuesday 03 February 2026 06:15:24 +0000 (0:00:01.195) 0:20:37.636 ****** 2026-02-03 06:15:40.068021 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:15:40.068042 | orchestrator | 2026-02-03 06:15:40.068053 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-03 06:15:40.068064 | orchestrator | Tuesday 03 February 2026 06:15:25 +0000 (0:00:01.247) 0:20:38.883 ****** 2026-02-03 06:15:40.068075 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:15:40.068086 | orchestrator | 2026-02-03 06:15:40.068097 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-03 06:15:40.068108 | orchestrator | Tuesday 03 February 2026 06:15:26 +0000 (0:00:01.203) 0:20:40.086 ****** 2026-02-03 06:15:40.068118 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:15:40.068129 | orchestrator | 2026-02-03 06:15:40.068140 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 
2026-02-03 06:15:40.068151 | orchestrator | Tuesday 03 February 2026 06:15:28 +0000 (0:00:01.181) 0:20:41.268 ****** 2026-02-03 06:15:40.068161 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:15:40.068172 | orchestrator | 2026-02-03 06:15:40.068183 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-03 06:15:40.068194 | orchestrator | Tuesday 03 February 2026 06:15:29 +0000 (0:00:01.197) 0:20:42.465 ****** 2026-02-03 06:15:40.068205 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:15:40.068215 | orchestrator | 2026-02-03 06:15:40.068226 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-03 06:15:40.068237 | orchestrator | Tuesday 03 February 2026 06:15:30 +0000 (0:00:01.146) 0:20:43.612 ****** 2026-02-03 06:15:40.068248 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:15:40.068258 | orchestrator | 2026-02-03 06:15:40.068269 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-03 06:15:40.068307 | orchestrator | Tuesday 03 February 2026 06:15:31 +0000 (0:00:01.180) 0:20:44.792 ****** 2026-02-03 06:15:40.068321 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:15:40.068332 | orchestrator | 2026-02-03 06:15:40.068343 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-03 06:15:40.068354 | orchestrator | Tuesday 03 February 2026 06:15:32 +0000 (0:00:01.230) 0:20:46.023 ****** 2026-02-03 06:15:40.068365 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:15:40.068376 | orchestrator | 2026-02-03 06:15:40.068387 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-03 06:15:40.068398 | orchestrator | Tuesday 03 February 2026 06:15:34 +0000 (0:00:01.208) 0:20:47.232 ****** 2026-02-03 06:15:40.068408 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:15:40.068419 
| orchestrator | 2026-02-03 06:15:40.068430 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-03 06:15:40.068441 | orchestrator | Tuesday 03 February 2026 06:15:35 +0000 (0:00:01.246) 0:20:48.478 ****** 2026-02-03 06:15:40.068452 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:15:40.068463 | orchestrator | 2026-02-03 06:15:40.068473 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-03 06:15:40.068484 | orchestrator | Tuesday 03 February 2026 06:15:36 +0000 (0:00:01.186) 0:20:49.664 ****** 2026-02-03 06:15:40.068495 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:15:40.068506 | orchestrator | 2026-02-03 06:15:40.068517 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-03 06:15:40.068527 | orchestrator | Tuesday 03 February 2026 06:15:37 +0000 (0:00:01.184) 0:20:50.849 ****** 2026-02-03 06:15:40.068538 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:15:40.068549 | orchestrator | 2026-02-03 06:15:40.068559 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-03 06:15:40.068570 | orchestrator | Tuesday 03 February 2026 06:15:38 +0000 (0:00:01.162) 0:20:52.012 ****** 2026-02-03 06:15:40.068581 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:15:40.068592 | orchestrator | 2026-02-03 06:15:40.068609 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-03 06:16:26.474659 | orchestrator | Tuesday 03 February 2026 06:15:40 +0000 (0:00:01.224) 0:20:53.236 ****** 2026-02-03 06:16:26.474783 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:16:26.474801 | orchestrator | 2026-02-03 06:16:26.474813 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-03 06:16:26.474826 | orchestrator | Tuesday 03 February 2026 
06:15:41 +0000 (0:00:01.262) 0:20:54.499 ****** 2026-02-03 06:16:26.474837 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:16:26.474851 | orchestrator | 2026-02-03 06:16:26.474870 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-03 06:16:26.474883 | orchestrator | Tuesday 03 February 2026 06:15:42 +0000 (0:00:01.195) 0:20:55.694 ****** 2026-02-03 06:16:26.474894 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:16:26.474905 | orchestrator | 2026-02-03 06:16:26.474916 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-03 06:16:26.474927 | orchestrator | Tuesday 03 February 2026 06:15:43 +0000 (0:00:01.298) 0:20:56.993 ****** 2026-02-03 06:16:26.474938 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:16:26.474949 | orchestrator | 2026-02-03 06:16:26.474960 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-03 06:16:26.474988 | orchestrator | Tuesday 03 February 2026 06:15:45 +0000 (0:00:01.217) 0:20:58.211 ****** 2026-02-03 06:16:26.475000 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:16:26.475011 | orchestrator | 2026-02-03 06:16:26.475022 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-03 06:16:26.475033 | orchestrator | Tuesday 03 February 2026 06:15:46 +0000 (0:00:01.344) 0:20:59.556 ****** 2026-02-03 06:16:26.475044 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:16:26.475054 | orchestrator | 2026-02-03 06:16:26.475065 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-03 06:16:26.475101 | orchestrator | Tuesday 03 February 2026 06:15:47 +0000 (0:00:01.240) 0:21:00.797 ****** 2026-02-03 06:16:26.475113 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:16:26.475123 | orchestrator | 2026-02-03 06:16:26.475134 | orchestrator | TASK 
[ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-03 06:16:26.475145 | orchestrator | Tuesday 03 February 2026 06:15:48 +0000 (0:00:01.206) 0:21:02.003 ****** 2026-02-03 06:16:26.475161 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:16:26.475178 | orchestrator | 2026-02-03 06:16:26.475191 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-03 06:16:26.475205 | orchestrator | Tuesday 03 February 2026 06:15:49 +0000 (0:00:01.172) 0:21:03.176 ****** 2026-02-03 06:16:26.475217 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:16:26.475229 | orchestrator | 2026-02-03 06:16:26.475243 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-03 06:16:26.475279 | orchestrator | Tuesday 03 February 2026 06:15:51 +0000 (0:00:01.193) 0:21:04.370 ****** 2026-02-03 06:16:26.475291 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:16:26.475304 | orchestrator | 2026-02-03 06:16:26.475317 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-03 06:16:26.475331 | orchestrator | Tuesday 03 February 2026 06:15:52 +0000 (0:00:01.304) 0:21:05.675 ****** 2026-02-03 06:16:26.475344 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:16:26.475358 | orchestrator | 2026-02-03 06:16:26.475378 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-03 06:16:26.475397 | orchestrator | Tuesday 03 February 2026 06:15:53 +0000 (0:00:01.215) 0:21:06.890 ****** 2026-02-03 06:16:26.475415 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:16:26.475433 | orchestrator | 2026-02-03 06:16:26.475451 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-03 06:16:26.475470 | orchestrator | Tuesday 03 
February 2026 06:15:54 +0000 (0:00:01.206) 0:21:08.097 ****** 2026-02-03 06:16:26.475488 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:16:26.475506 | orchestrator | 2026-02-03 06:16:26.475526 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-03 06:16:26.475545 | orchestrator | Tuesday 03 February 2026 06:15:56 +0000 (0:00:01.248) 0:21:09.346 ****** 2026-02-03 06:16:26.475563 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:16:26.475581 | orchestrator | 2026-02-03 06:16:26.475592 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-03 06:16:26.475603 | orchestrator | Tuesday 03 February 2026 06:15:57 +0000 (0:00:01.165) 0:21:10.511 ****** 2026-02-03 06:16:26.475614 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:16:26.475625 | orchestrator | 2026-02-03 06:16:26.475636 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-03 06:16:26.475646 | orchestrator | Tuesday 03 February 2026 06:15:58 +0000 (0:00:01.201) 0:21:11.713 ****** 2026-02-03 06:16:26.475657 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:16:26.475671 | orchestrator | 2026-02-03 06:16:26.475690 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-03 06:16:26.475707 | orchestrator | Tuesday 03 February 2026 06:15:59 +0000 (0:00:01.226) 0:21:12.939 ****** 2026-02-03 06:16:26.475725 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:16:26.475744 | orchestrator | 2026-02-03 06:16:26.475762 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-03 06:16:26.475782 | orchestrator | Tuesday 03 February 2026 06:16:01 +0000 (0:00:01.390) 0:21:14.330 ****** 2026-02-03 06:16:26.475800 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:16:26.475817 | orchestrator | 2026-02-03 
06:16:26.475835 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-03 06:16:26.475853 | orchestrator | Tuesday 03 February 2026 06:16:02 +0000 (0:00:01.212) 0:21:15.542 ******
2026-02-03 06:16:26.475870 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:16:26.475902 | orchestrator |
2026-02-03 06:16:26.475921 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-03 06:16:26.475940 | orchestrator | Tuesday 03 February 2026 06:16:03 +0000 (0:00:01.291) 0:21:16.834 ******
2026-02-03 06:16:26.475959 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:16:26.475978 | orchestrator |
2026-02-03 06:16:26.476024 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-03 06:16:26.476046 | orchestrator | Tuesday 03 February 2026 06:16:04 +0000 (0:00:01.205) 0:21:18.040 ******
2026-02-03 06:16:26.476064 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:16:26.476083 | orchestrator |
2026-02-03 06:16:26.476102 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-03 06:16:26.476122 | orchestrator | Tuesday 03 February 2026 06:16:06 +0000 (0:00:01.307) 0:21:19.347 ******
2026-02-03 06:16:26.476141 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:16:26.476157 | orchestrator |
2026-02-03 06:16:26.476168 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-03 06:16:26.476179 | orchestrator | Tuesday 03 February 2026 06:16:07 +0000 (0:00:01.215) 0:21:20.563 ******
2026-02-03 06:16:26.476189 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:16:26.476200 | orchestrator |
2026-02-03 06:16:26.476211 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-03 06:16:26.476222 | orchestrator | Tuesday 03 February 2026 06:16:08 +0000 (0:00:01.144) 0:21:21.708 ******
2026-02-03 06:16:26.476233 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:16:26.476243 | orchestrator |
2026-02-03 06:16:26.476316 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-03 06:16:26.476329 | orchestrator | Tuesday 03 February 2026 06:16:09 +0000 (0:00:01.167) 0:21:22.876 ******
2026-02-03 06:16:26.476340 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:16:26.476351 | orchestrator |
2026-02-03 06:16:26.476361 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-03 06:16:26.476372 | orchestrator | Tuesday 03 February 2026 06:16:10 +0000 (0:00:01.275) 0:21:24.151 ******
2026-02-03 06:16:26.476383 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-03 06:16:26.476394 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-03 06:16:26.476405 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-03 06:16:26.476416 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:16:26.476427 | orchestrator |
2026-02-03 06:16:26.476437 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-03 06:16:26.476448 | orchestrator | Tuesday 03 February 2026 06:16:12 +0000 (0:00:01.915) 0:21:26.067 ******
2026-02-03 06:16:26.476459 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-03 06:16:26.476470 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-03 06:16:26.476481 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-03 06:16:26.476491 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:16:26.476502 | orchestrator |
2026-02-03 06:16:26.476513 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-03 06:16:26.476524 | orchestrator | Tuesday 03 February 2026 06:16:14 +0000 (0:00:01.558) 0:21:27.626 ******
2026-02-03 06:16:26.476534 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-03 06:16:26.476545 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-03 06:16:26.476556 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-03 06:16:26.476566 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:16:26.476577 | orchestrator |
2026-02-03 06:16:26.476588 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-03 06:16:26.476599 | orchestrator | Tuesday 03 February 2026 06:16:16 +0000 (0:00:01.596) 0:21:29.222 ******
2026-02-03 06:16:26.476609 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:16:26.476620 | orchestrator |
2026-02-03 06:16:26.476641 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-03 06:16:26.476652 | orchestrator | Tuesday 03 February 2026 06:16:17 +0000 (0:00:01.210) 0:21:30.434 ******
2026-02-03 06:16:26.476663 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-02-03 06:16:26.476673 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:16:26.476684 | orchestrator |
2026-02-03 06:16:26.476695 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-03 06:16:26.476706 | orchestrator | Tuesday 03 February 2026 06:16:18 +0000 (0:00:01.378) 0:21:31.812 ******
2026-02-03 06:16:26.476717 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:16:26.476727 | orchestrator |
2026-02-03 06:16:26.476738 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-03 06:16:26.476749 | orchestrator | Tuesday 03 February 2026 06:16:19 +0000 (0:00:01.192) 0:21:33.004 ******
2026-02-03 06:16:26.476760 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-03 06:16:26.476771 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-03 06:16:26.476782 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-03 06:16:26.476792 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:16:26.476803 | orchestrator |
2026-02-03 06:16:26.476814 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-03 06:16:26.476825 | orchestrator | Tuesday 03 February 2026 06:16:21 +0000 (0:00:01.475) 0:21:34.480 ******
2026-02-03 06:16:26.476836 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:16:26.476847 | orchestrator |
2026-02-03 06:16:26.476857 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-03 06:16:26.476868 | orchestrator | Tuesday 03 February 2026 06:16:22 +0000 (0:00:01.224) 0:21:35.704 ******
2026-02-03 06:16:26.476879 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:16:26.476890 | orchestrator |
2026-02-03 06:16:26.476901 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-03 06:16:26.476911 | orchestrator | Tuesday 03 February 2026 06:16:23 +0000 (0:00:01.207) 0:21:36.912 ******
2026-02-03 06:16:26.476922 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:16:26.476933 | orchestrator |
2026-02-03 06:16:26.476944 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-03 06:16:26.476954 | orchestrator | Tuesday 03 February 2026 06:16:24 +0000 (0:00:01.184) 0:21:38.096 ******
2026-02-03 06:16:26.476969 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:16:26.476988 | orchestrator |
2026-02-03 06:16:26.477021 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-02-03 06:17:00.533319 | orchestrator |
2026-02-03 06:17:00.533426 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-02-03 06:17:00.533440 | orchestrator | Tuesday 03 February 2026 06:16:26 +0000 (0:00:01.549) 0:21:39.645 ******
2026-02-03 06:17:00.533450 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.533460 | orchestrator |
2026-02-03 06:17:00.533469 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-03 06:17:00.533479 | orchestrator | Tuesday 03 February 2026 06:16:27 +0000 (0:00:00.855) 0:21:40.501 ******
2026-02-03 06:17:00.533488 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.533497 | orchestrator |
2026-02-03 06:17:00.533506 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-03 06:17:00.533515 | orchestrator | Tuesday 03 February 2026 06:16:28 +0000 (0:00:00.796) 0:21:41.297 ******
2026-02-03 06:17:00.533524 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.533533 | orchestrator |
2026-02-03 06:17:00.533541 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-03 06:17:00.533550 | orchestrator | Tuesday 03 February 2026 06:16:28 +0000 (0:00:00.801) 0:21:42.098 ******
2026-02-03 06:17:00.533575 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.533584 | orchestrator |
2026-02-03 06:17:00.533594 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-03 06:17:00.533623 | orchestrator | Tuesday 03 February 2026 06:16:29 +0000 (0:00:00.844) 0:21:42.942 ******
2026-02-03 06:17:00.533633 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.533641 | orchestrator |
2026-02-03 06:17:00.533650 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-03 06:17:00.533659 | orchestrator | Tuesday 03 February 2026 06:16:30 +0000 (0:00:00.848) 0:21:43.791 ******
2026-02-03 06:17:00.533668 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.533677 | orchestrator |
2026-02-03 06:17:00.533686 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-03 06:17:00.533695 | orchestrator | Tuesday 03 February 2026 06:16:31 +0000 (0:00:00.834) 0:21:44.626 ******
2026-02-03 06:17:00.533704 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.533712 | orchestrator |
2026-02-03 06:17:00.533721 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-03 06:17:00.533730 | orchestrator | Tuesday 03 February 2026 06:16:32 +0000 (0:00:00.811) 0:21:45.437 ******
2026-02-03 06:17:00.533739 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.533748 | orchestrator |
2026-02-03 06:17:00.533757 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-03 06:17:00.533766 | orchestrator | Tuesday 03 February 2026 06:16:33 +0000 (0:00:00.942) 0:21:46.380 ******
2026-02-03 06:17:00.533774 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.533783 | orchestrator |
2026-02-03 06:17:00.533792 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-03 06:17:00.533801 | orchestrator | Tuesday 03 February 2026 06:16:34 +0000 (0:00:00.832) 0:21:47.213 ******
2026-02-03 06:17:00.533810 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.533819 | orchestrator |
2026-02-03 06:17:00.533828 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-03 06:17:00.533837 | orchestrator | Tuesday 03 February 2026 06:16:34 +0000 (0:00:00.818) 0:21:48.031 ******
2026-02-03 06:17:00.533846 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.533855 | orchestrator |
2026-02-03 06:17:00.533863 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-03 06:17:00.533872 | orchestrator | Tuesday 03 February 2026 06:16:35 +0000 (0:00:00.947) 0:21:48.978 ******
2026-02-03 06:17:00.533881 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.533890 | orchestrator |
2026-02-03 06:17:00.533899 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-03 06:17:00.533908 | orchestrator | Tuesday 03 February 2026 06:16:36 +0000 (0:00:00.990) 0:21:49.968 ******
2026-02-03 06:17:00.533917 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.533925 | orchestrator |
2026-02-03 06:17:00.533934 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-03 06:17:00.533943 | orchestrator | Tuesday 03 February 2026 06:16:37 +0000 (0:00:00.800) 0:21:50.769 ******
2026-02-03 06:17:00.533952 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.533961 | orchestrator |
2026-02-03 06:17:00.533970 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-03 06:17:00.533978 | orchestrator | Tuesday 03 February 2026 06:16:38 +0000 (0:00:00.850) 0:21:51.620 ******
2026-02-03 06:17:00.533987 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.533996 | orchestrator |
2026-02-03 06:17:00.534005 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-03 06:17:00.534062 | orchestrator | Tuesday 03 February 2026 06:16:39 +0000 (0:00:00.834) 0:21:52.454 ******
2026-02-03 06:17:00.534074 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.534083 | orchestrator |
2026-02-03 06:17:00.534092 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-03 06:17:00.534101 | orchestrator | Tuesday 03 February 2026 06:16:40 +0000 (0:00:00.886) 0:21:53.341 ******
2026-02-03 06:17:00.534110 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.534119 | orchestrator |
2026-02-03 06:17:00.534128 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-03 06:17:00.534145 | orchestrator | Tuesday 03 February 2026 06:16:41 +0000 (0:00:00.855) 0:21:54.196 ******
2026-02-03 06:17:00.534154 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.534163 | orchestrator |
2026-02-03 06:17:00.534172 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-03 06:17:00.534181 | orchestrator | Tuesday 03 February 2026 06:16:41 +0000 (0:00:00.829) 0:21:55.026 ******
2026-02-03 06:17:00.534189 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.534198 | orchestrator |
2026-02-03 06:17:00.534207 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-03 06:17:00.534217 | orchestrator | Tuesday 03 February 2026 06:16:42 +0000 (0:00:00.834) 0:21:55.861 ******
2026-02-03 06:17:00.534226 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.534249 | orchestrator |
2026-02-03 06:17:00.534276 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-03 06:17:00.534286 | orchestrator | Tuesday 03 February 2026 06:16:43 +0000 (0:00:00.862) 0:21:56.723 ******
2026-02-03 06:17:00.534294 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.534303 | orchestrator |
2026-02-03 06:17:00.534312 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-03 06:17:00.534321 | orchestrator | Tuesday 03 February 2026 06:16:44 +0000 (0:00:00.803) 0:21:57.528 ******
2026-02-03 06:17:00.534330 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.534339 | orchestrator |
2026-02-03 06:17:00.534347 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-03 06:17:00.534356 | orchestrator | Tuesday 03 February 2026 06:16:45 +0000 (0:00:00.874) 0:21:58.402 ******
2026-02-03 06:17:00.534365 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.534374 | orchestrator |
2026-02-03 06:17:00.534383 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-03 06:17:00.534392 | orchestrator | Tuesday 03 February 2026 06:16:46 +0000 (0:00:00.837) 0:21:59.239 ******
2026-02-03 06:17:00.534400 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.534409 | orchestrator |
2026-02-03 06:17:00.534418 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-03 06:17:00.534427 | orchestrator | Tuesday 03 February 2026 06:16:47 +0000 (0:00:01.068) 0:22:00.308 ******
2026-02-03 06:17:00.534436 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.534445 | orchestrator |
2026-02-03 06:17:00.534454 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-03 06:17:00.534463 | orchestrator | Tuesday 03 February 2026 06:16:47 +0000 (0:00:00.803) 0:22:01.111 ******
2026-02-03 06:17:00.534472 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.534481 | orchestrator |
2026-02-03 06:17:00.534489 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-03 06:17:00.534498 | orchestrator | Tuesday 03 February 2026 06:16:48 +0000 (0:00:00.794) 0:22:01.906 ******
2026-02-03 06:17:00.534507 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.534516 | orchestrator |
2026-02-03 06:17:00.534525 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-03 06:17:00.534534 | orchestrator | Tuesday 03 February 2026 06:16:49 +0000 (0:00:00.813) 0:22:02.719 ******
2026-02-03 06:17:00.534542 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.534551 | orchestrator |
2026-02-03 06:17:00.534560 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-03 06:17:00.534569 | orchestrator | Tuesday 03 February 2026 06:16:50 +0000 (0:00:00.871) 0:22:03.591 ******
2026-02-03 06:17:00.534578 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.534586 | orchestrator |
2026-02-03 06:17:00.534595 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-03 06:17:00.534604 | orchestrator | Tuesday 03 February 2026 06:16:51 +0000 (0:00:00.811) 0:22:04.403 ******
2026-02-03 06:17:00.534613 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.534622 | orchestrator |
2026-02-03 06:17:00.534631 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-03 06:17:00.534646 | orchestrator | Tuesday 03 February 2026 06:16:52 +0000 (0:00:00.865) 0:22:05.269 ******
2026-02-03 06:17:00.534655 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.534664 | orchestrator |
2026-02-03 06:17:00.534673 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-03 06:17:00.534682 | orchestrator | Tuesday 03 February 2026 06:16:52 +0000 (0:00:00.869) 0:22:06.138 ******
2026-02-03 06:17:00.534690 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.534699 | orchestrator |
2026-02-03 06:17:00.534708 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-03 06:17:00.534717 | orchestrator | Tuesday 03 February 2026 06:16:53 +0000 (0:00:00.834) 0:22:06.972 ******
2026-02-03 06:17:00.534726 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.534734 | orchestrator |
2026-02-03 06:17:00.534743 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-03 06:17:00.534752 | orchestrator | Tuesday 03 February 2026 06:16:54 +0000 (0:00:00.816) 0:22:07.789 ******
2026-02-03 06:17:00.534761 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.534770 | orchestrator |
2026-02-03 06:17:00.534778 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-03 06:17:00.534787 | orchestrator | Tuesday 03 February 2026 06:16:55 +0000 (0:00:00.851) 0:22:08.640 ******
2026-02-03 06:17:00.534796 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.534805 | orchestrator |
2026-02-03 06:17:00.534844 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-03 06:17:00.534854 | orchestrator | Tuesday 03 February 2026 06:16:56 +0000 (0:00:00.959) 0:22:09.600 ******
2026-02-03 06:17:00.534863 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.534872 | orchestrator |
2026-02-03 06:17:00.534881 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-03 06:17:00.534890 | orchestrator | Tuesday 03 February 2026 06:16:57 +0000 (0:00:00.800) 0:22:10.400 ******
2026-02-03 06:17:00.534899 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.534907 | orchestrator |
2026-02-03 06:17:00.534916 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-03 06:17:00.534925 | orchestrator | Tuesday 03 February 2026 06:16:58 +0000 (0:00:00.840) 0:22:11.241 ******
2026-02-03 06:17:00.534934 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.534943 | orchestrator |
2026-02-03 06:17:00.534952 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-03 06:17:00.534961 | orchestrator | Tuesday 03 February 2026 06:16:58 +0000 (0:00:00.815) 0:22:12.056 ******
2026-02-03 06:17:00.534970 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.534978 | orchestrator |
2026-02-03 06:17:00.534987 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-03 06:17:00.534997 | orchestrator | Tuesday 03 February 2026 06:16:59 +0000 (0:00:00.797) 0:22:12.854 ******
2026-02-03 06:17:00.535006 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:00.535015 | orchestrator |
2026-02-03 06:17:00.535024 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-03 06:17:00.535039 | orchestrator | Tuesday 03 February 2026 06:17:00 +0000 (0:00:00.853) 0:22:13.708 ******
2026-02-03 06:17:32.630783 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:32.630903 | orchestrator |
2026-02-03 06:17:32.630920 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-03 06:17:32.630933 | orchestrator | Tuesday 03 February 2026 06:17:01 +0000 (0:00:00.829) 0:22:14.537 ******
2026-02-03 06:17:32.630944 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:32.630956 | orchestrator |
2026-02-03 06:17:32.630967 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-03 06:17:32.630978 | orchestrator | Tuesday 03 February 2026 06:17:02 +0000 (0:00:00.847) 0:22:15.384 ******
2026-02-03 06:17:32.630989 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:32.631024 | orchestrator |
2026-02-03 06:17:32.631036 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-03 06:17:32.631047 | orchestrator | Tuesday 03 February 2026 06:17:02 +0000 (0:00:00.801) 0:22:16.185 ******
2026-02-03 06:17:32.631058 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:32.631069 | orchestrator |
2026-02-03 06:17:32.631095 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-03 06:17:32.631106 | orchestrator | Tuesday 03 February 2026 06:17:03 +0000 (0:00:00.857) 0:22:17.043 ******
2026-02-03 06:17:32.631117 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:32.631128 | orchestrator |
2026-02-03 06:17:32.631138 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-03 06:17:32.631149 | orchestrator | Tuesday 03 February 2026 06:17:04 +0000 (0:00:00.786) 0:22:17.829 ******
2026-02-03 06:17:32.631160 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:32.631171 | orchestrator |
2026-02-03 06:17:32.631183 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-03 06:17:32.631194 | orchestrator | Tuesday 03 February 2026 06:17:05 +0000 (0:00:00.961) 0:22:18.791 ******
2026-02-03 06:17:32.631205 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:32.631275 | orchestrator |
2026-02-03 06:17:32.631288 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-03 06:17:32.631299 | orchestrator | Tuesday 03 February 2026 06:17:06 +0000 (0:00:00.815) 0:22:19.606 ******
2026-02-03 06:17:32.631310 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:32.631324 | orchestrator |
2026-02-03 06:17:32.631336 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-03 06:17:32.631349 | orchestrator | Tuesday 03 February 2026 06:17:07 +0000 (0:00:00.948) 0:22:20.554 ******
2026-02-03 06:17:32.631361 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:32.631373 | orchestrator |
2026-02-03 06:17:32.631386 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-03 06:17:32.631398 | orchestrator | Tuesday 03 February 2026 06:17:08 +0000 (0:00:00.912) 0:22:21.467 ******
2026-02-03 06:17:32.631411 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:32.631424 | orchestrator |
2026-02-03 06:17:32.631436 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-03 06:17:32.631450 | orchestrator | Tuesday 03 February 2026 06:17:09 +0000 (0:00:00.785) 0:22:22.253 ******
2026-02-03 06:17:32.631463 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:32.631476 | orchestrator |
2026-02-03 06:17:32.631488 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-03 06:17:32.631501 | orchestrator | Tuesday 03 February 2026 06:17:09 +0000 (0:00:00.795) 0:22:23.049 ******
2026-02-03 06:17:32.631513 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:32.631525 | orchestrator |
2026-02-03 06:17:32.631538 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-03 06:17:32.631551 | orchestrator | Tuesday 03 February 2026 06:17:10 +0000 (0:00:00.928) 0:22:23.978 ******
2026-02-03 06:17:32.631563 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:32.631575 | orchestrator |
2026-02-03 06:17:32.631588 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-03 06:17:32.631600 | orchestrator | Tuesday 03 February 2026 06:17:11 +0000 (0:00:00.814) 0:22:24.793 ******
2026-02-03 06:17:32.631613 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:32.631625 | orchestrator |
2026-02-03 06:17:32.631638 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-03 06:17:32.631650 | orchestrator | Tuesday 03 February 2026 06:17:12 +0000 (0:00:00.885) 0:22:25.678 ******
2026-02-03 06:17:32.631661 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-03 06:17:32.631672 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-03 06:17:32.631683 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-03 06:17:32.631703 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:32.631714 | orchestrator |
2026-02-03 06:17:32.631724 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-03 06:17:32.631735 | orchestrator | Tuesday 03 February 2026 06:17:13 +0000 (0:00:01.217) 0:22:26.896 ******
2026-02-03 06:17:32.631746 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-03 06:17:32.631757 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-03 06:17:32.631768 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-03 06:17:32.631779 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:32.631789 | orchestrator |
2026-02-03 06:17:32.631800 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-03 06:17:32.631811 | orchestrator | Tuesday 03 February 2026 06:17:14 +0000 (0:00:01.124) 0:22:28.021 ******
2026-02-03 06:17:32.631822 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-03 06:17:32.631832 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-03 06:17:32.631843 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-03 06:17:32.631853 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:32.631865 | orchestrator |
2026-02-03 06:17:32.631876 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-03 06:17:32.631886 | orchestrator | Tuesday 03 February 2026 06:17:15 +0000 (0:00:01.149) 0:22:29.170 ******
2026-02-03 06:17:32.631914 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:32.631926 | orchestrator |
2026-02-03 06:17:32.631937 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-03 06:17:32.631948 | orchestrator | Tuesday 03 February 2026 06:17:16 +0000 (0:00:00.817) 0:22:29.988 ******
2026-02-03 06:17:32.631960 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-03 06:17:32.631970 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:32.631981 | orchestrator |
2026-02-03 06:17:32.631992 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-03 06:17:32.632003 | orchestrator | Tuesday 03 February 2026 06:17:17 +0000 (0:00:00.980) 0:22:30.969 ******
2026-02-03 06:17:32.632013 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:32.632024 | orchestrator |
2026-02-03 06:17:32.632035 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-03 06:17:32.632046 | orchestrator | Tuesday 03 February 2026 06:17:18 +0000 (0:00:00.938) 0:22:31.907 ******
2026-02-03 06:17:32.632056 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-03 06:17:32.632073 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-03 06:17:32.632084 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-03 06:17:32.632095 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:32.632105 | orchestrator |
2026-02-03 06:17:32.632116 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-03 06:17:32.632127 | orchestrator | Tuesday 03 February 2026 06:17:19 +0000 (0:00:01.108) 0:22:33.016 ******
2026-02-03 06:17:32.632137 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:32.632148 | orchestrator |
2026-02-03 06:17:32.632159 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-03 06:17:32.632170 | orchestrator | Tuesday 03 February 2026 06:17:20 +0000 (0:00:00.818) 0:22:33.835 ******
2026-02-03 06:17:32.632181 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:32.632192 | orchestrator |
2026-02-03 06:17:32.632202 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-03 06:17:32.632242 | orchestrator | Tuesday 03 February 2026 06:17:21 +0000 (0:00:00.806) 0:22:34.641 ******
2026-02-03 06:17:32.632262 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:32.632280 | orchestrator |
2026-02-03 06:17:32.632298 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-03 06:17:32.632310 | orchestrator | Tuesday 03 February 2026 06:17:22 +0000 (0:00:00.856) 0:22:35.498 ******
2026-02-03 06:17:32.632338 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:17:32.632356 | orchestrator |
2026-02-03 06:17:32.632374 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-02-03 06:17:32.632392 | orchestrator |
2026-02-03 06:17:32.632412 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-02-03 06:17:32.632431 | orchestrator | Tuesday 03 February 2026 06:17:23 +0000 (0:00:01.067) 0:22:36.565 ******
2026-02-03 06:17:32.632450 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:17:32.632468 | orchestrator |
2026-02-03 06:17:32.632479 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-03 06:17:32.632490 | orchestrator | Tuesday 03 February 2026 06:17:24 +0000 (0:00:00.921) 0:22:37.487 ******
2026-02-03 06:17:32.632500 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:17:32.632511 | orchestrator |
2026-02-03 06:17:32.632521 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-03 06:17:32.632531 | orchestrator | Tuesday 03 February 2026 06:17:25 +0000 (0:00:00.815) 0:22:38.303 ******
2026-02-03 06:17:32.632542 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:17:32.632553 | orchestrator |
2026-02-03 06:17:32.632563 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-03 06:17:32.632574 | orchestrator | Tuesday 03 February 2026 06:17:25 +0000 (0:00:00.794) 0:22:39.097 ******
2026-02-03 06:17:32.632584 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:17:32.632595 | orchestrator |
2026-02-03 06:17:32.632605 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-03 06:17:32.632618 | orchestrator | Tuesday 03 February 2026 06:17:26 +0000 (0:00:00.817) 0:22:39.915 ******
2026-02-03 06:17:32.632637 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:17:32.632659 | orchestrator |
2026-02-03 06:17:32.632685 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-03 06:17:32.632703 | orchestrator | Tuesday 03 February 2026 06:17:27 +0000 (0:00:00.887) 0:22:40.803 ******
2026-02-03 06:17:32.632720 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:17:32.632737 | orchestrator |
2026-02-03 06:17:32.632753 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-03 06:17:32.632769 | orchestrator | Tuesday 03 February 2026 06:17:28 +0000 (0:00:00.864) 0:22:41.667 ******
2026-02-03 06:17:32.632787 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:17:32.632803 | orchestrator |
2026-02-03 06:17:32.632820 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-03 06:17:32.632839 | orchestrator | Tuesday 03 February 2026 06:17:29 +0000 (0:00:00.810) 0:22:42.478 ******
2026-02-03 06:17:32.632857 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:17:32.632873 | orchestrator |
2026-02-03 06:17:32.632890 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-03 06:17:32.632906 | orchestrator | Tuesday 03 February 2026 06:17:30 +0000 (0:00:00.841) 0:22:43.319 ******
2026-02-03 06:17:32.632923 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:17:32.632940 | orchestrator |
2026-02-03 06:17:32.632957 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-03 06:17:32.632974 | orchestrator | Tuesday 03 February 2026 06:17:30 +0000 (0:00:00.844) 0:22:44.164 ******
2026-02-03 06:17:32.632993 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:17:32.633011 | orchestrator |
2026-02-03 06:17:32.633028 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-03 06:17:32.633046 | orchestrator | Tuesday 03 February 2026 06:17:31 +0000 (0:00:00.805) 0:22:44.969 ******
2026-02-03 06:17:32.633064 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:17:32.633084 | orchestrator |
2026-02-03 06:17:32.633103 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-03 06:17:32.633136 | orchestrator | Tuesday 03 February 2026 06:17:32 +0000 (0:00:00.835) 0:22:45.805 ******
2026-02-03 06:18:06.671300 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:18:06.671444 | orchestrator |
2026-02-03 06:18:06.671474 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-03 06:18:06.671529 | orchestrator | Tuesday 03 February 2026 06:17:33 +0000 (0:00:00.812) 0:22:46.618 ******
2026-02-03 06:18:06.671542 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:18:06.671553 | orchestrator |
2026-02-03 06:18:06.671565 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-03 06:18:06.671576 | orchestrator | Tuesday 03 February 2026 06:17:34 +0000 (0:00:00.825) 0:22:47.443 ******
2026-02-03 06:18:06.671587 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:18:06.671598 | orchestrator |
2026-02-03 06:18:06.671609 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-03 06:18:06.671620 | orchestrator | Tuesday 03 February 2026 06:17:35 +0000 (0:00:00.867) 0:22:48.311 ******
2026-02-03 06:18:06.671631 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:18:06.671642 | orchestrator |
2026-02-03 06:18:06.671668 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-03 06:18:06.671679 | orchestrator | Tuesday 03 February 2026 06:17:35 +0000 (0:00:00.867) 0:22:49.179 ******
2026-02-03 06:18:06.671690 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:18:06.671703 | orchestrator |
2026-02-03 06:18:06.671716 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-03 06:18:06.671729 | orchestrator | Tuesday 03 February 2026 06:17:36 +0000 (0:00:00.789) 0:22:49.968 ******
2026-02-03 06:18:06.671741 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:18:06.671754 | orchestrator |
2026-02-03 06:18:06.671767 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-03 06:18:06.671779 | orchestrator | Tuesday 03 February 2026 06:17:37 +0000 (0:00:00.936) 0:22:50.905 ******
2026-02-03 06:18:06.671792 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:18:06.671805 | orchestrator |
2026-02-03 06:18:06.671818 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-03 06:18:06.671832 | orchestrator | Tuesday 03 February 2026 06:17:38 +0000 (0:00:00.847) 0:22:51.752 ******
2026-02-03 06:18:06.671844 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:18:06.671859 | orchestrator |
2026-02-03 06:18:06.671878 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-03 06:18:06.671894 | orchestrator | Tuesday 03 February 2026 06:17:39 +0000 (0:00:00.862) 0:22:52.615 ******
2026-02-03 06:18:06.671907 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:18:06.671919 | orchestrator |
2026-02-03 06:18:06.671932 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-03 06:18:06.671945 | orchestrator | Tuesday 03 February 2026 06:17:40 +0000 (0:00:00.904) 0:22:53.519 ******
2026-02-03 06:18:06.671958 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:18:06.671972 | orchestrator |
2026-02-03 06:18:06.671993 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-03 06:18:06.672014 | orchestrator | Tuesday 03 February 2026 06:17:41 +0000 (0:00:00.838) 0:22:54.358 ******
2026-02-03 06:18:06.672035 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:18:06.672052 | orchestrator |
2026-02-03 06:18:06.672066 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-03 06:18:06.672076 | orchestrator | Tuesday 03 February 2026 06:17:41 +0000 (0:00:00.814) 0:22:55.172 ******
2026-02-03 06:18:06.672087 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:18:06.672098 | orchestrator |
2026-02-03 06:18:06.672109 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-03 06:18:06.672119 | orchestrator | Tuesday 03 February 2026 06:17:42 +0000 (0:00:00.792) 0:22:55.965 ******
2026-02-03 06:18:06.672130 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:18:06.672141 | orchestrator |
2026-02-03 06:18:06.672151 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-03 06:18:06.672162 | orchestrator | Tuesday 03 February 2026 06:17:43 +0000 (0:00:00.844) 0:22:56.809 ******
2026-02-03 06:18:06.672173 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:18:06.672184 | orchestrator |
2026-02-03 06:18:06.672231 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-03 06:18:06.672243 | orchestrator | Tuesday 03 February 2026 06:17:44 +0000 (0:00:00.854) 0:22:57.664 ******
2026-02-03 06:18:06.672254 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:06.672265 | orchestrator | 2026-02-03 06:18:06.672276 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-03 06:18:06.672287 | orchestrator | Tuesday 03 February 2026 06:17:45 +0000 (0:00:00.880) 0:22:58.545 ****** 2026-02-03 06:18:06.672298 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:06.672309 | orchestrator | 2026-02-03 06:18:06.672320 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-03 06:18:06.672331 | orchestrator | Tuesday 03 February 2026 06:17:46 +0000 (0:00:00.828) 0:22:59.373 ****** 2026-02-03 06:18:06.672342 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:06.672353 | orchestrator | 2026-02-03 06:18:06.672364 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-03 06:18:06.672375 | orchestrator | Tuesday 03 February 2026 06:17:47 +0000 (0:00:00.905) 0:23:00.279 ****** 2026-02-03 06:18:06.672386 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:06.672397 | orchestrator | 2026-02-03 06:18:06.672407 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-03 06:18:06.672418 | orchestrator | Tuesday 03 February 2026 06:17:47 +0000 (0:00:00.844) 0:23:01.124 ****** 2026-02-03 06:18:06.672429 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:06.672440 | orchestrator | 2026-02-03 06:18:06.672451 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-03 06:18:06.672468 | orchestrator | Tuesday 03 February 2026 06:17:48 +0000 (0:00:00.879) 0:23:02.004 ****** 2026-02-03 06:18:06.672486 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:06.672502 | orchestrator | 2026-02-03 06:18:06.672520 | orchestrator | TASK [ceph-container-common : Include release.yml] 
***************************** 2026-02-03 06:18:06.672540 | orchestrator | Tuesday 03 February 2026 06:17:49 +0000 (0:00:00.813) 0:23:02.817 ****** 2026-02-03 06:18:06.672561 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:06.672579 | orchestrator | 2026-02-03 06:18:06.672617 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-03 06:18:06.672629 | orchestrator | Tuesday 03 February 2026 06:17:50 +0000 (0:00:00.830) 0:23:03.648 ****** 2026-02-03 06:18:06.672640 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:06.672651 | orchestrator | 2026-02-03 06:18:06.672662 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-03 06:18:06.672673 | orchestrator | Tuesday 03 February 2026 06:17:51 +0000 (0:00:00.843) 0:23:04.492 ****** 2026-02-03 06:18:06.672683 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:06.672694 | orchestrator | 2026-02-03 06:18:06.672705 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-03 06:18:06.672716 | orchestrator | Tuesday 03 February 2026 06:17:52 +0000 (0:00:00.805) 0:23:05.297 ****** 2026-02-03 06:18:06.672726 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:06.672737 | orchestrator | 2026-02-03 06:18:06.672748 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-03 06:18:06.672766 | orchestrator | Tuesday 03 February 2026 06:17:52 +0000 (0:00:00.782) 0:23:06.080 ****** 2026-02-03 06:18:06.672777 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:06.672787 | orchestrator | 2026-02-03 06:18:06.672798 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-03 06:18:06.672809 | orchestrator | Tuesday 03 February 2026 06:17:53 +0000 (0:00:00.806) 0:23:06.886 ****** 2026-02-03 06:18:06.672819 | orchestrator | skipping: 
[testbed-node-2] 2026-02-03 06:18:06.672830 | orchestrator | 2026-02-03 06:18:06.672841 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-03 06:18:06.672852 | orchestrator | Tuesday 03 February 2026 06:17:54 +0000 (0:00:00.860) 0:23:07.747 ****** 2026-02-03 06:18:06.672863 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:06.672882 | orchestrator | 2026-02-03 06:18:06.672892 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-03 06:18:06.672904 | orchestrator | Tuesday 03 February 2026 06:17:55 +0000 (0:00:00.939) 0:23:08.687 ****** 2026-02-03 06:18:06.672914 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:06.672925 | orchestrator | 2026-02-03 06:18:06.672936 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-03 06:18:06.672947 | orchestrator | Tuesday 03 February 2026 06:17:56 +0000 (0:00:00.815) 0:23:09.502 ****** 2026-02-03 06:18:06.672958 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:06.672969 | orchestrator | 2026-02-03 06:18:06.672980 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-03 06:18:06.672991 | orchestrator | Tuesday 03 February 2026 06:17:57 +0000 (0:00:00.843) 0:23:10.346 ****** 2026-02-03 06:18:06.673002 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:06.673013 | orchestrator | 2026-02-03 06:18:06.673024 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-03 06:18:06.673035 | orchestrator | Tuesday 03 February 2026 06:17:57 +0000 (0:00:00.802) 0:23:11.148 ****** 2026-02-03 06:18:06.673046 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:06.673057 | orchestrator | 2026-02-03 06:18:06.673068 | orchestrator | TASK [ceph-config : Run 
'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-03 06:18:06.673079 | orchestrator | Tuesday 03 February 2026 06:17:58 +0000 (0:00:00.871) 0:23:12.019 ****** 2026-02-03 06:18:06.673089 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:06.673100 | orchestrator | 2026-02-03 06:18:06.673111 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-03 06:18:06.673122 | orchestrator | Tuesday 03 February 2026 06:17:59 +0000 (0:00:00.829) 0:23:12.849 ****** 2026-02-03 06:18:06.673133 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:06.673143 | orchestrator | 2026-02-03 06:18:06.673154 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-03 06:18:06.673165 | orchestrator | Tuesday 03 February 2026 06:18:00 +0000 (0:00:00.892) 0:23:13.742 ****** 2026-02-03 06:18:06.673176 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:06.673209 | orchestrator | 2026-02-03 06:18:06.673222 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-03 06:18:06.673233 | orchestrator | Tuesday 03 February 2026 06:18:01 +0000 (0:00:00.790) 0:23:14.532 ****** 2026-02-03 06:18:06.673243 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:06.673254 | orchestrator | 2026-02-03 06:18:06.673265 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-03 06:18:06.673276 | orchestrator | Tuesday 03 February 2026 06:18:02 +0000 (0:00:00.936) 0:23:15.469 ****** 2026-02-03 06:18:06.673286 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:06.673297 | orchestrator | 2026-02-03 06:18:06.673308 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-03 06:18:06.673319 | orchestrator | Tuesday 03 February 2026 06:18:03 +0000 (0:00:00.847) 0:23:16.316 ****** 2026-02-03 
06:18:06.673330 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:06.673341 | orchestrator | 2026-02-03 06:18:06.673352 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-03 06:18:06.673362 | orchestrator | Tuesday 03 February 2026 06:18:04 +0000 (0:00:00.958) 0:23:17.275 ****** 2026-02-03 06:18:06.673373 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:06.673384 | orchestrator | 2026-02-03 06:18:06.673395 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-03 06:18:06.673405 | orchestrator | Tuesday 03 February 2026 06:18:04 +0000 (0:00:00.816) 0:23:18.092 ****** 2026-02-03 06:18:06.673416 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:06.673427 | orchestrator | 2026-02-03 06:18:06.673438 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-03 06:18:06.673456 | orchestrator | Tuesday 03 February 2026 06:18:05 +0000 (0:00:00.857) 0:23:18.949 ****** 2026-02-03 06:18:06.673467 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:06.673478 | orchestrator | 2026-02-03 06:18:06.673489 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-03 06:18:06.673508 | orchestrator | Tuesday 03 February 2026 06:18:06 +0000 (0:00:00.895) 0:23:19.845 ****** 2026-02-03 06:18:48.165538 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:48.165677 | orchestrator | 2026-02-03 06:18:48.165699 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-03 06:18:48.165717 | orchestrator | Tuesday 03 February 2026 06:18:07 +0000 (0:00:00.867) 0:23:20.713 ****** 2026-02-03 06:18:48.165733 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:48.165747 | orchestrator | 2026-02-03 06:18:48.165762 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-03 06:18:48.165777 | orchestrator | Tuesday 03 February 2026 06:18:08 +0000 (0:00:00.811) 0:23:21.525 ****** 2026-02-03 06:18:48.165794 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:48.165809 | orchestrator | 2026-02-03 06:18:48.165824 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-03 06:18:48.165839 | orchestrator | Tuesday 03 February 2026 06:18:09 +0000 (0:00:00.854) 0:23:22.379 ****** 2026-02-03 06:18:48.165874 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-03 06:18:48.165889 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-03 06:18:48.165904 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-03 06:18:48.165918 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:48.165933 | orchestrator | 2026-02-03 06:18:48.165947 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-03 06:18:48.165961 | orchestrator | Tuesday 03 February 2026 06:18:10 +0000 (0:00:01.588) 0:23:23.968 ****** 2026-02-03 06:18:48.165976 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-03 06:18:48.165990 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-03 06:18:48.166004 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-03 06:18:48.166084 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:48.166104 | orchestrator | 2026-02-03 06:18:48.166121 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-03 06:18:48.166135 | orchestrator | Tuesday 03 February 2026 06:18:11 +0000 (0:00:01.157) 0:23:25.126 ****** 2026-02-03 06:18:48.166151 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-03 06:18:48.166195 | orchestrator | skipping: 
[testbed-node-2] => (item=testbed-node-4)  2026-02-03 06:18:48.166211 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-03 06:18:48.166222 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:48.166232 | orchestrator | 2026-02-03 06:18:48.166243 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-03 06:18:48.166254 | orchestrator | Tuesday 03 February 2026 06:18:13 +0000 (0:00:01.092) 0:23:26.218 ****** 2026-02-03 06:18:48.166264 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:48.166274 | orchestrator | 2026-02-03 06:18:48.166285 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-03 06:18:48.166295 | orchestrator | Tuesday 03 February 2026 06:18:13 +0000 (0:00:00.814) 0:23:27.033 ****** 2026-02-03 06:18:48.166307 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-02-03 06:18:48.166317 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:48.166328 | orchestrator | 2026-02-03 06:18:48.166338 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-03 06:18:48.166348 | orchestrator | Tuesday 03 February 2026 06:18:14 +0000 (0:00:00.955) 0:23:27.989 ****** 2026-02-03 06:18:48.166358 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:48.166369 | orchestrator | 2026-02-03 06:18:48.166379 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-03 06:18:48.166389 | orchestrator | Tuesday 03 February 2026 06:18:15 +0000 (0:00:01.051) 0:23:29.040 ****** 2026-02-03 06:18:48.166424 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-03 06:18:48.166433 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-03 06:18:48.166442 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-03 06:18:48.166450 | orchestrator | skipping: 
[testbed-node-2] 2026-02-03 06:18:48.166459 | orchestrator | 2026-02-03 06:18:48.166467 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-03 06:18:48.166476 | orchestrator | Tuesday 03 February 2026 06:18:16 +0000 (0:00:01.132) 0:23:30.172 ****** 2026-02-03 06:18:48.166485 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:48.166493 | orchestrator | 2026-02-03 06:18:48.166501 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-03 06:18:48.166509 | orchestrator | Tuesday 03 February 2026 06:18:17 +0000 (0:00:00.872) 0:23:31.045 ****** 2026-02-03 06:18:48.166517 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:48.166525 | orchestrator | 2026-02-03 06:18:48.166532 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-03 06:18:48.166540 | orchestrator | Tuesday 03 February 2026 06:18:18 +0000 (0:00:00.864) 0:23:31.909 ****** 2026-02-03 06:18:48.166548 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:48.166556 | orchestrator | 2026-02-03 06:18:48.166563 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-03 06:18:48.166571 | orchestrator | Tuesday 03 February 2026 06:18:19 +0000 (0:00:00.811) 0:23:32.721 ****** 2026-02-03 06:18:48.166579 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:18:48.166587 | orchestrator | 2026-02-03 06:18:48.166595 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-02-03 06:18:48.166602 | orchestrator | 2026-02-03 06:18:48.166610 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-03 06:18:48.166618 | orchestrator | Tuesday 03 February 2026 06:18:21 +0000 (0:00:01.849) 0:23:34.571 ****** 2026-02-03 06:18:48.166626 | orchestrator | changed: [testbed-node-0] 2026-02-03 06:18:48.166634 | 
orchestrator | 2026-02-03 06:18:48.166642 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-02-03 06:18:48.166649 | orchestrator | Tuesday 03 February 2026 06:18:24 +0000 (0:00:03.039) 0:23:37.611 ****** 2026-02-03 06:18:48.166657 | orchestrator | changed: [testbed-node-0] 2026-02-03 06:18:48.166665 | orchestrator | 2026-02-03 06:18:48.166673 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-03 06:18:48.166698 | orchestrator | Tuesday 03 February 2026 06:18:27 +0000 (0:00:02.650) 0:23:40.261 ****** 2026-02-03 06:18:48.166707 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-02-03 06:18:48.166715 | orchestrator | 2026-02-03 06:18:48.166723 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-03 06:18:48.166731 | orchestrator | Tuesday 03 February 2026 06:18:28 +0000 (0:00:01.164) 0:23:41.426 ****** 2026-02-03 06:18:48.166738 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:18:48.166747 | orchestrator | 2026-02-03 06:18:48.166754 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-03 06:18:48.166762 | orchestrator | Tuesday 03 February 2026 06:18:29 +0000 (0:00:01.583) 0:23:43.009 ****** 2026-02-03 06:18:48.166770 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:18:48.166778 | orchestrator | 2026-02-03 06:18:48.166786 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-03 06:18:48.166801 | orchestrator | Tuesday 03 February 2026 06:18:31 +0000 (0:00:01.221) 0:23:44.231 ****** 2026-02-03 06:18:48.166809 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:18:48.166817 | orchestrator | 2026-02-03 06:18:48.166825 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-03 06:18:48.166832 | orchestrator | 
Tuesday 03 February 2026 06:18:32 +0000 (0:00:01.531) 0:23:45.762 ****** 2026-02-03 06:18:48.166840 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:18:48.166855 | orchestrator | 2026-02-03 06:18:48.166863 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-03 06:18:48.166871 | orchestrator | Tuesday 03 February 2026 06:18:33 +0000 (0:00:01.191) 0:23:46.954 ****** 2026-02-03 06:18:48.166879 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:18:48.166886 | orchestrator | 2026-02-03 06:18:48.166894 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-03 06:18:48.166902 | orchestrator | Tuesday 03 February 2026 06:18:34 +0000 (0:00:01.223) 0:23:48.178 ****** 2026-02-03 06:18:48.166910 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:18:48.166918 | orchestrator | 2026-02-03 06:18:48.166925 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-03 06:18:48.166934 | orchestrator | Tuesday 03 February 2026 06:18:36 +0000 (0:00:01.250) 0:23:49.428 ****** 2026-02-03 06:18:48.166942 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:18:48.166950 | orchestrator | 2026-02-03 06:18:48.166958 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-03 06:18:48.166965 | orchestrator | Tuesday 03 February 2026 06:18:37 +0000 (0:00:01.203) 0:23:50.631 ****** 2026-02-03 06:18:48.166973 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:18:48.166981 | orchestrator | 2026-02-03 06:18:48.166989 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-03 06:18:48.166996 | orchestrator | Tuesday 03 February 2026 06:18:38 +0000 (0:00:01.238) 0:23:51.870 ****** 2026-02-03 06:18:48.167004 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-03 06:18:48.167012 | orchestrator | ok: [testbed-node-0 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:18:48.167020 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:18:48.167028 | orchestrator | 2026-02-03 06:18:48.167036 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-03 06:18:48.167044 | orchestrator | Tuesday 03 February 2026 06:18:40 +0000 (0:00:01.853) 0:23:53.724 ****** 2026-02-03 06:18:48.167052 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:18:48.167059 | orchestrator | 2026-02-03 06:18:48.167067 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-03 06:18:48.167075 | orchestrator | Tuesday 03 February 2026 06:18:41 +0000 (0:00:01.266) 0:23:54.991 ****** 2026-02-03 06:18:48.167083 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-03 06:18:48.167091 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:18:48.167099 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:18:48.167107 | orchestrator | 2026-02-03 06:18:48.167114 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-03 06:18:48.167122 | orchestrator | Tuesday 03 February 2026 06:18:44 +0000 (0:00:02.978) 0:23:57.969 ****** 2026-02-03 06:18:48.167130 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-03 06:18:48.167138 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-03 06:18:48.167146 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-03 06:18:48.167154 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:18:48.167162 | orchestrator | 2026-02-03 06:18:48.167190 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-03 06:18:48.167204 | 
orchestrator | Tuesday 03 February 2026 06:18:46 +0000 (0:00:01.615) 0:23:59.584 ****** 2026-02-03 06:18:48.167219 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-03 06:18:48.167235 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-03 06:18:48.167257 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-03 06:18:48.167269 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:18:48.167283 | orchestrator | 2026-02-03 06:18:48.167297 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-03 06:18:48.167320 | orchestrator | Tuesday 03 February 2026 06:18:48 +0000 (0:00:01.756) 0:24:01.341 ****** 2026-02-03 06:19:10.600218 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 06:19:10.600357 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 06:19:10.600381 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 06:19:10.600402 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:19:10.600422 | orchestrator | 2026-02-03 06:19:10.600443 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-03 06:19:10.600464 | orchestrator | Tuesday 03 February 2026 06:18:49 +0000 (0:00:01.215) 0:24:02.556 ****** 2026-02-03 06:19:10.600485 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'fc9af7e241e8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-03 06:18:42.351572', 'end': '2026-02-03 06:18:42.395704', 'delta': '0:00:00.044132', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fc9af7e241e8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-03 06:19:10.600511 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'a8f198eef309', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 
'name=ceph-mon-testbed-node-1'], 'start': '2026-02-03 06:18:42.905406', 'end': '2026-02-03 06:18:42.943614', 'delta': '0:00:00.038208', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a8f198eef309'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-03 06:19:10.600532 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '79d18794d8bb', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-03 06:18:43.482453', 'end': '2026-02-03 06:18:43.532863', 'delta': '0:00:00.050410', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['79d18794d8bb'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-03 06:19:10.600569 | orchestrator | 2026-02-03 06:19:10.600582 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-03 06:19:10.600593 | orchestrator | Tuesday 03 February 2026 06:18:50 +0000 (0:00:01.319) 0:24:03.876 ****** 2026-02-03 06:19:10.600604 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:19:10.600616 | orchestrator | 2026-02-03 06:19:10.600627 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-03 06:19:10.600656 | orchestrator | Tuesday 03 February 2026 06:18:51 
+0000 (0:00:01.299) 0:24:05.176 ****** 2026-02-03 06:19:10.600668 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:19:10.600679 | orchestrator | 2026-02-03 06:19:10.600690 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-03 06:19:10.600703 | orchestrator | Tuesday 03 February 2026 06:18:53 +0000 (0:00:01.377) 0:24:06.554 ****** 2026-02-03 06:19:10.600715 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:19:10.600729 | orchestrator | 2026-02-03 06:19:10.600741 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-03 06:19:10.600754 | orchestrator | Tuesday 03 February 2026 06:18:54 +0000 (0:00:01.233) 0:24:07.787 ****** 2026-02-03 06:19:10.600773 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:19:10.600792 | orchestrator | 2026-02-03 06:19:10.600810 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-03 06:19:10.600829 | orchestrator | Tuesday 03 February 2026 06:18:56 +0000 (0:00:02.169) 0:24:09.957 ****** 2026-02-03 06:19:10.600857 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:19:10.600878 | orchestrator | 2026-02-03 06:19:10.600892 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-03 06:19:10.600904 | orchestrator | Tuesday 03 February 2026 06:18:57 +0000 (0:00:01.215) 0:24:11.173 ****** 2026-02-03 06:19:10.600917 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:19:10.600929 | orchestrator | 2026-02-03 06:19:10.600942 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-03 06:19:10.600955 | orchestrator | Tuesday 03 February 2026 06:18:59 +0000 (0:00:01.142) 0:24:12.316 ****** 2026-02-03 06:19:10.600965 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:19:10.600976 | orchestrator | 2026-02-03 06:19:10.600987 | orchestrator | TASK [ceph-facts : Set_fact fsid] 
********************************************** 2026-02-03 06:19:10.600997 | orchestrator | Tuesday 03 February 2026 06:19:00 +0000 (0:00:01.786) 0:24:14.103 ****** 2026-02-03 06:19:10.601008 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:19:10.601019 | orchestrator | 2026-02-03 06:19:10.601029 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-03 06:19:10.601040 | orchestrator | Tuesday 03 February 2026 06:19:02 +0000 (0:00:01.194) 0:24:15.298 ****** 2026-02-03 06:19:10.601051 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:19:10.601061 | orchestrator | 2026-02-03 06:19:10.601072 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-03 06:19:10.601083 | orchestrator | Tuesday 03 February 2026 06:19:03 +0000 (0:00:01.181) 0:24:16.479 ****** 2026-02-03 06:19:10.601093 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:19:10.601104 | orchestrator | 2026-02-03 06:19:10.601114 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-03 06:19:10.601125 | orchestrator | Tuesday 03 February 2026 06:19:04 +0000 (0:00:01.177) 0:24:17.656 ****** 2026-02-03 06:19:10.601139 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:19:10.601188 | orchestrator | 2026-02-03 06:19:10.601208 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-03 06:19:10.601242 | orchestrator | Tuesday 03 February 2026 06:19:05 +0000 (0:00:01.243) 0:24:18.900 ****** 2026-02-03 06:19:10.601262 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:19:10.601274 | orchestrator | 2026-02-03 06:19:10.601285 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-03 06:19:10.601296 | orchestrator | Tuesday 03 February 2026 06:19:06 +0000 (0:00:01.183) 0:24:20.084 ****** 2026-02-03 06:19:10.601306 | orchestrator | 
skipping: [testbed-node-0] 2026-02-03 06:19:10.601318 | orchestrator | 2026-02-03 06:19:10.601329 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-03 06:19:10.601340 | orchestrator | Tuesday 03 February 2026 06:19:08 +0000 (0:00:01.241) 0:24:21.325 ****** 2026-02-03 06:19:10.601351 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:19:10.601362 | orchestrator | 2026-02-03 06:19:10.601373 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-03 06:19:10.601383 | orchestrator | Tuesday 03 February 2026 06:19:09 +0000 (0:00:01.155) 0:24:22.481 ****** 2026-02-03 06:19:10.601395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:19:10.601407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:19:10.601418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:19:10.601440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-03 06:19:11.882078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:19:11.882237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:19:11.882255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 
Bytes', 'host': '', 'holders': []}})  2026-02-03 06:19:11.882292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8b2ebf21', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part16', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part14', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part15', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part1', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 
'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-03 06:19:11.882308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:19:11.882340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:19:11.882353 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:19:11.882366 | orchestrator | 2026-02-03 06:19:11.882386 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-03 06:19:11.882399 | orchestrator | Tuesday 03 February 2026 06:19:10 +0000 (0:00:01.289) 0:24:23.770 ****** 2026-02-03 06:19:11.882412 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:19:11.882433 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:19:11.882445 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:19:11.882458 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 
'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:19:11.882470 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:19:11.882489 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:19:30.549125 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:19:30.549339 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8b2ebf21', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part16', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part14', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part15', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part1', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:19:30.549371 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:19:30.549416 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:19:30.549449 | orchestrator | skipping: [testbed-node-0] 2026-02-03 
06:19:30.549474 | orchestrator | 2026-02-03 06:19:30.549497 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-03 06:19:30.549518 | orchestrator | Tuesday 03 February 2026 06:19:11 +0000 (0:00:01.292) 0:24:25.063 ****** 2026-02-03 06:19:30.549541 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:19:30.549553 | orchestrator | 2026-02-03 06:19:30.549564 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-03 06:19:30.549575 | orchestrator | Tuesday 03 February 2026 06:19:13 +0000 (0:00:01.600) 0:24:26.663 ****** 2026-02-03 06:19:30.549586 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:19:30.549597 | orchestrator | 2026-02-03 06:19:30.549610 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-03 06:19:30.549623 | orchestrator | Tuesday 03 February 2026 06:19:14 +0000 (0:00:01.259) 0:24:27.923 ****** 2026-02-03 06:19:30.549635 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:19:30.549648 | orchestrator | 2026-02-03 06:19:30.549660 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-03 06:19:30.549673 | orchestrator | Tuesday 03 February 2026 06:19:16 +0000 (0:00:01.581) 0:24:29.504 ****** 2026-02-03 06:19:30.549685 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:19:30.549698 | orchestrator | 2026-02-03 06:19:30.549710 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-03 06:19:30.549723 | orchestrator | Tuesday 03 February 2026 06:19:17 +0000 (0:00:01.310) 0:24:30.815 ****** 2026-02-03 06:19:30.549736 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:19:30.549749 | orchestrator | 2026-02-03 06:19:30.549761 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-03 06:19:30.549772 | orchestrator | Tuesday 03 February 2026 
06:19:18 +0000 (0:00:01.332) 0:24:32.147 ****** 2026-02-03 06:19:30.549783 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:19:30.549793 | orchestrator | 2026-02-03 06:19:30.549804 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-03 06:19:30.549815 | orchestrator | Tuesday 03 February 2026 06:19:20 +0000 (0:00:01.232) 0:24:33.381 ****** 2026-02-03 06:19:30.549826 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-03 06:19:30.549837 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-03 06:19:30.549847 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-03 06:19:30.549865 | orchestrator | 2026-02-03 06:19:30.549883 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-03 06:19:30.549901 | orchestrator | Tuesday 03 February 2026 06:19:22 +0000 (0:00:01.845) 0:24:35.226 ****** 2026-02-03 06:19:30.549921 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-03 06:19:30.549939 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-03 06:19:30.549959 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-03 06:19:30.549970 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:19:30.549981 | orchestrator | 2026-02-03 06:19:30.549992 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-03 06:19:30.550003 | orchestrator | Tuesday 03 February 2026 06:19:23 +0000 (0:00:01.295) 0:24:36.522 ****** 2026-02-03 06:19:30.550013 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:19:30.550085 | orchestrator | 2026-02-03 06:19:30.550096 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-03 06:19:30.550107 | orchestrator | Tuesday 03 February 2026 06:19:24 +0000 (0:00:01.192) 0:24:37.714 ****** 2026-02-03 06:19:30.550118 | 
orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-03 06:19:30.550129 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:19:30.550187 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:19:30.550210 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-03 06:19:30.550222 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-03 06:19:30.550233 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-03 06:19:30.550243 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-03 06:19:30.550264 | orchestrator | 2026-02-03 06:19:30.550276 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-03 06:19:30.550287 | orchestrator | Tuesday 03 February 2026 06:19:26 +0000 (0:00:02.011) 0:24:39.725 ****** 2026-02-03 06:19:30.550298 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-03 06:19:30.550309 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:19:30.550320 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:19:30.550331 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-03 06:19:30.550342 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-03 06:19:30.550360 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-03 06:19:30.550377 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-03 06:19:30.550396 | orchestrator | 2026-02-03 06:19:30.550413 | 
orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-03 06:19:30.550429 | orchestrator | Tuesday 03 February 2026 06:19:29 +0000 (0:00:02.781) 0:24:42.507 ****** 2026-02-03 06:19:30.550448 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0 2026-02-03 06:19:30.550466 | orchestrator | 2026-02-03 06:19:30.550497 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-03 06:20:24.070873 | orchestrator | Tuesday 03 February 2026 06:19:30 +0000 (0:00:01.214) 0:24:43.721 ****** 2026-02-03 06:20:24.071009 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0 2026-02-03 06:20:24.071027 | orchestrator | 2026-02-03 06:20:24.071039 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-03 06:20:24.071051 | orchestrator | Tuesday 03 February 2026 06:19:31 +0000 (0:00:01.214) 0:24:44.936 ****** 2026-02-03 06:20:24.071062 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:20:24.071075 | orchestrator | 2026-02-03 06:20:24.071086 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-03 06:20:24.071179 | orchestrator | Tuesday 03 February 2026 06:19:33 +0000 (0:00:01.636) 0:24:46.573 ****** 2026-02-03 06:20:24.071205 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:20:24.071223 | orchestrator | 2026-02-03 06:20:24.071234 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-03 06:20:24.071245 | orchestrator | Tuesday 03 February 2026 06:19:34 +0000 (0:00:01.272) 0:24:47.845 ****** 2026-02-03 06:20:24.071256 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:20:24.071267 | orchestrator | 2026-02-03 06:20:24.071279 | orchestrator | TASK [ceph-handler : Check for a rgw container] 
******************************** 2026-02-03 06:20:24.071290 | orchestrator | Tuesday 03 February 2026 06:19:35 +0000 (0:00:01.272) 0:24:49.118 ****** 2026-02-03 06:20:24.071301 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:20:24.071312 | orchestrator | 2026-02-03 06:20:24.071323 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-03 06:20:24.071334 | orchestrator | Tuesday 03 February 2026 06:19:37 +0000 (0:00:01.232) 0:24:50.350 ****** 2026-02-03 06:20:24.071345 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:20:24.071356 | orchestrator | 2026-02-03 06:20:24.071367 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-03 06:20:24.071377 | orchestrator | Tuesday 03 February 2026 06:19:38 +0000 (0:00:01.648) 0:24:51.999 ****** 2026-02-03 06:20:24.071389 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:20:24.071401 | orchestrator | 2026-02-03 06:20:24.071414 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-03 06:20:24.071427 | orchestrator | Tuesday 03 February 2026 06:19:39 +0000 (0:00:01.157) 0:24:53.157 ****** 2026-02-03 06:20:24.071440 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:20:24.071478 | orchestrator | 2026-02-03 06:20:24.071490 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-03 06:20:24.071501 | orchestrator | Tuesday 03 February 2026 06:19:41 +0000 (0:00:01.231) 0:24:54.388 ****** 2026-02-03 06:20:24.071512 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:20:24.071523 | orchestrator | 2026-02-03 06:20:24.071534 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-03 06:20:24.071545 | orchestrator | Tuesday 03 February 2026 06:19:42 +0000 (0:00:01.596) 0:24:55.985 ****** 2026-02-03 06:20:24.071556 | orchestrator | ok: [testbed-node-0] 2026-02-03 
06:20:24.071568 | orchestrator | 2026-02-03 06:20:24.071579 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-03 06:20:24.071590 | orchestrator | Tuesday 03 February 2026 06:19:44 +0000 (0:00:01.631) 0:24:57.617 ****** 2026-02-03 06:20:24.071600 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:20:24.071611 | orchestrator | 2026-02-03 06:20:24.071622 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-03 06:20:24.071633 | orchestrator | Tuesday 03 February 2026 06:19:45 +0000 (0:00:01.250) 0:24:58.867 ****** 2026-02-03 06:20:24.071644 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:20:24.071655 | orchestrator | 2026-02-03 06:20:24.071666 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-03 06:20:24.071676 | orchestrator | Tuesday 03 February 2026 06:19:46 +0000 (0:00:01.237) 0:25:00.105 ****** 2026-02-03 06:20:24.071687 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:20:24.071698 | orchestrator | 2026-02-03 06:20:24.071709 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-03 06:20:24.071720 | orchestrator | Tuesday 03 February 2026 06:19:48 +0000 (0:00:01.170) 0:25:01.275 ****** 2026-02-03 06:20:24.071731 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:20:24.071742 | orchestrator | 2026-02-03 06:20:24.071753 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-03 06:20:24.071763 | orchestrator | Tuesday 03 February 2026 06:19:49 +0000 (0:00:01.143) 0:25:02.419 ****** 2026-02-03 06:20:24.071774 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:20:24.071785 | orchestrator | 2026-02-03 06:20:24.071796 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-03 06:20:24.071807 | orchestrator | Tuesday 03 
February 2026 06:19:50 +0000 (0:00:01.297) 0:25:03.717 ****** 2026-02-03 06:20:24.071818 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:20:24.071829 | orchestrator | 2026-02-03 06:20:24.071839 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-03 06:20:24.071850 | orchestrator | Tuesday 03 February 2026 06:19:51 +0000 (0:00:01.179) 0:25:04.896 ****** 2026-02-03 06:20:24.071861 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:20:24.071872 | orchestrator | 2026-02-03 06:20:24.071883 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-03 06:20:24.071893 | orchestrator | Tuesday 03 February 2026 06:19:52 +0000 (0:00:01.165) 0:25:06.062 ****** 2026-02-03 06:20:24.071905 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:20:24.071916 | orchestrator | 2026-02-03 06:20:24.071927 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-03 06:20:24.071938 | orchestrator | Tuesday 03 February 2026 06:19:54 +0000 (0:00:01.195) 0:25:07.258 ****** 2026-02-03 06:20:24.071948 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:20:24.071959 | orchestrator | 2026-02-03 06:20:24.071970 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-03 06:20:24.071981 | orchestrator | Tuesday 03 February 2026 06:19:55 +0000 (0:00:01.219) 0:25:08.477 ****** 2026-02-03 06:20:24.071992 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:20:24.072003 | orchestrator | 2026-02-03 06:20:24.072014 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-03 06:20:24.072045 | orchestrator | Tuesday 03 February 2026 06:19:56 +0000 (0:00:01.227) 0:25:09.705 ****** 2026-02-03 06:20:24.072066 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:20:24.072124 | orchestrator | 2026-02-03 06:20:24.072146 | orchestrator | TASK 
[ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-03 06:20:24.072157 | orchestrator | Tuesday 03 February 2026 06:19:57 +0000 (0:00:01.158) 0:25:10.863 ****** 2026-02-03 06:20:24.072168 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:20:24.072179 | orchestrator | 2026-02-03 06:20:24.072190 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-03 06:20:24.072201 | orchestrator | Tuesday 03 February 2026 06:19:58 +0000 (0:00:01.190) 0:25:12.054 ****** 2026-02-03 06:20:24.072212 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:20:24.072222 | orchestrator | 2026-02-03 06:20:24.072233 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-03 06:20:24.072244 | orchestrator | Tuesday 03 February 2026 06:20:00 +0000 (0:00:01.187) 0:25:13.241 ****** 2026-02-03 06:20:24.072255 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:20:24.072266 | orchestrator | 2026-02-03 06:20:24.072277 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-03 06:20:24.072287 | orchestrator | Tuesday 03 February 2026 06:20:01 +0000 (0:00:01.198) 0:25:14.440 ****** 2026-02-03 06:20:24.072298 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:20:24.072309 | orchestrator | 2026-02-03 06:20:24.072320 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-03 06:20:24.072330 | orchestrator | Tuesday 03 February 2026 06:20:02 +0000 (0:00:01.181) 0:25:15.621 ****** 2026-02-03 06:20:24.072341 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:20:24.072352 | orchestrator | 2026-02-03 06:20:24.072363 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-03 06:20:24.072374 | orchestrator | Tuesday 03 February 2026 06:20:03 +0000 (0:00:01.167) 0:25:16.789 ****** 2026-02-03 
06:20:24.072384 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:20:24.072395 | orchestrator | 2026-02-03 06:20:24.072406 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-03 06:20:24.072418 | orchestrator | Tuesday 03 February 2026 06:20:04 +0000 (0:00:01.244) 0:25:18.033 ****** 2026-02-03 06:20:24.072429 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:20:24.072440 | orchestrator | 2026-02-03 06:20:24.072450 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-03 06:20:24.072461 | orchestrator | Tuesday 03 February 2026 06:20:06 +0000 (0:00:01.206) 0:25:19.240 ****** 2026-02-03 06:20:24.072472 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:20:24.072482 | orchestrator | 2026-02-03 06:20:24.072493 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-03 06:20:24.072504 | orchestrator | Tuesday 03 February 2026 06:20:07 +0000 (0:00:01.174) 0:25:20.414 ****** 2026-02-03 06:20:24.072515 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:20:24.072526 | orchestrator | 2026-02-03 06:20:24.072536 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-03 06:20:24.072547 | orchestrator | Tuesday 03 February 2026 06:20:08 +0000 (0:00:01.298) 0:25:21.713 ****** 2026-02-03 06:20:24.072558 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:20:24.072569 | orchestrator | 2026-02-03 06:20:24.072580 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-03 06:20:24.072591 | orchestrator | Tuesday 03 February 2026 06:20:09 +0000 (0:00:01.183) 0:25:22.897 ****** 2026-02-03 06:20:24.072602 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:20:24.072613 | orchestrator | 2026-02-03 06:20:24.072623 | orchestrator | TASK [ceph-container-common : Generate systemd ceph 
target file] *************** 2026-02-03 06:20:24.072634 | orchestrator | Tuesday 03 February 2026 06:20:10 +0000 (0:00:01.183) 0:25:24.081 ****** 2026-02-03 06:20:24.072645 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:20:24.072656 | orchestrator | 2026-02-03 06:20:24.072667 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-03 06:20:24.072678 | orchestrator | Tuesday 03 February 2026 06:20:12 +0000 (0:00:02.025) 0:25:26.106 ****** 2026-02-03 06:20:24.072688 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:20:24.072706 | orchestrator | 2026-02-03 06:20:24.072717 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-03 06:20:24.072728 | orchestrator | Tuesday 03 February 2026 06:20:15 +0000 (0:00:02.695) 0:25:28.802 ****** 2026-02-03 06:20:24.072739 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0 2026-02-03 06:20:24.072750 | orchestrator | 2026-02-03 06:20:24.072761 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-03 06:20:24.072771 | orchestrator | Tuesday 03 February 2026 06:20:16 +0000 (0:00:01.177) 0:25:29.979 ****** 2026-02-03 06:20:24.072782 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:20:24.072793 | orchestrator | 2026-02-03 06:20:24.072803 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-03 06:20:24.072814 | orchestrator | Tuesday 03 February 2026 06:20:18 +0000 (0:00:01.297) 0:25:31.276 ****** 2026-02-03 06:20:24.072825 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:20:24.072836 | orchestrator | 2026-02-03 06:20:24.072846 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-03 06:20:24.072857 | orchestrator | Tuesday 03 February 2026 06:20:19 +0000 (0:00:01.185) 0:25:32.462 ****** 2026-02-03 
06:20:24.072868 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-03 06:20:24.072879 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-03 06:20:24.072890 | orchestrator | 2026-02-03 06:20:24.072901 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-03 06:20:24.072912 | orchestrator | Tuesday 03 February 2026 06:20:21 +0000 (0:00:02.018) 0:25:34.480 ****** 2026-02-03 06:20:24.072922 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:20:24.072933 | orchestrator | 2026-02-03 06:20:24.072944 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-03 06:20:24.072955 | orchestrator | Tuesday 03 February 2026 06:20:22 +0000 (0:00:01.554) 0:25:36.035 ****** 2026-02-03 06:20:24.072966 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:20:24.072977 | orchestrator | 2026-02-03 06:20:24.072996 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-03 06:21:12.717184 | orchestrator | Tuesday 03 February 2026 06:20:24 +0000 (0:00:01.212) 0:25:37.247 ****** 2026-02-03 06:21:12.717304 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:21:12.717321 | orchestrator | 2026-02-03 06:21:12.717335 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-03 06:21:12.717346 | orchestrator | Tuesday 03 February 2026 06:20:25 +0000 (0:00:01.359) 0:25:38.606 ****** 2026-02-03 06:21:12.717358 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:21:12.717369 | orchestrator | 2026-02-03 06:21:12.717381 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-03 06:21:12.717392 | orchestrator | Tuesday 03 February 2026 06:20:26 +0000 (0:00:01.200) 0:25:39.807 ****** 2026-02-03 06:21:12.717403 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0 2026-02-03 06:21:12.717415 | orchestrator | 2026-02-03 06:21:12.717426 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-03 06:21:12.717437 | orchestrator | Tuesday 03 February 2026 06:20:27 +0000 (0:00:01.178) 0:25:40.986 ****** 2026-02-03 06:21:12.717449 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:21:12.717461 | orchestrator | 2026-02-03 06:21:12.717472 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-03 06:21:12.717484 | orchestrator | Tuesday 03 February 2026 06:20:29 +0000 (0:00:01.864) 0:25:42.850 ****** 2026-02-03 06:21:12.717495 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-03 06:21:12.717506 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-03 06:21:12.717517 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-03 06:21:12.717551 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:21:12.717563 | orchestrator | 2026-02-03 06:21:12.717575 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-03 06:21:12.717586 | orchestrator | Tuesday 03 February 2026 06:20:30 +0000 (0:00:01.236) 0:25:44.087 ****** 2026-02-03 06:21:12.717596 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:21:12.717608 | orchestrator | 2026-02-03 06:21:12.717618 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-03 06:21:12.717630 | orchestrator | Tuesday 03 February 2026 06:20:32 +0000 (0:00:01.234) 0:25:45.322 ****** 2026-02-03 06:21:12.717640 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:21:12.717651 | orchestrator | 2026-02-03 06:21:12.717662 | orchestrator | TASK [ceph-container-common : Copy ceph dev image 
file] ************************ 2026-02-03 06:21:12.717675 | orchestrator | Tuesday 03 February 2026 06:20:33 +0000 (0:00:01.212) 0:25:46.535 ****** 2026-02-03 06:21:12.717687 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:21:12.717701 | orchestrator | 2026-02-03 06:21:12.717714 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-03 06:21:12.717726 | orchestrator | Tuesday 03 February 2026 06:20:34 +0000 (0:00:01.173) 0:25:47.708 ****** 2026-02-03 06:21:12.717739 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:21:12.717751 | orchestrator | 2026-02-03 06:21:12.717764 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-03 06:21:12.717777 | orchestrator | Tuesday 03 February 2026 06:20:35 +0000 (0:00:01.215) 0:25:48.924 ****** 2026-02-03 06:21:12.717790 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:21:12.717803 | orchestrator | 2026-02-03 06:21:12.717815 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-03 06:21:12.717828 | orchestrator | Tuesday 03 February 2026 06:20:36 +0000 (0:00:01.252) 0:25:50.177 ****** 2026-02-03 06:21:12.717841 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:21:12.717854 | orchestrator | 2026-02-03 06:21:12.717866 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-03 06:21:12.717878 | orchestrator | Tuesday 03 February 2026 06:20:39 +0000 (0:00:02.754) 0:25:52.932 ****** 2026-02-03 06:21:12.717892 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:21:12.717905 | orchestrator | 2026-02-03 06:21:12.717918 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-03 06:21:12.717931 | orchestrator | Tuesday 03 February 2026 06:20:40 +0000 (0:00:01.248) 0:25:54.181 ****** 2026-02-03 06:21:12.717944 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0 2026-02-03 06:21:12.717956 | orchestrator | 2026-02-03 06:21:12.717969 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-03 06:21:12.717981 | orchestrator | Tuesday 03 February 2026 06:20:42 +0000 (0:00:01.243) 0:25:55.425 ****** 2026-02-03 06:21:12.717993 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:21:12.718006 | orchestrator | 2026-02-03 06:21:12.718102 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-03 06:21:12.718114 | orchestrator | Tuesday 03 February 2026 06:20:43 +0000 (0:00:01.258) 0:25:56.683 ****** 2026-02-03 06:21:12.718125 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:21:12.718136 | orchestrator | 2026-02-03 06:21:12.718156 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-03 06:21:12.718167 | orchestrator | Tuesday 03 February 2026 06:20:44 +0000 (0:00:01.197) 0:25:57.881 ****** 2026-02-03 06:21:12.718178 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:21:12.718189 | orchestrator | 2026-02-03 06:21:12.718200 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-03 06:21:12.718211 | orchestrator | Tuesday 03 February 2026 06:20:46 +0000 (0:00:01.337) 0:25:59.218 ****** 2026-02-03 06:21:12.718221 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:21:12.718232 | orchestrator | 2026-02-03 06:21:12.718243 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-03 06:21:12.718254 | orchestrator | Tuesday 03 February 2026 06:20:47 +0000 (0:00:01.183) 0:26:00.402 ****** 2026-02-03 06:21:12.718275 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:21:12.718286 | orchestrator | 2026-02-03 06:21:12.718297 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
octopus] ******************* 2026-02-03 06:21:12.718308 | orchestrator | Tuesday 03 February 2026 06:20:48 +0000 (0:00:01.199) 0:26:01.601 ****** 2026-02-03 06:21:12.718336 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:21:12.718348 | orchestrator | 2026-02-03 06:21:12.718366 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-03 06:21:12.718378 | orchestrator | Tuesday 03 February 2026 06:20:49 +0000 (0:00:01.207) 0:26:02.809 ****** 2026-02-03 06:21:12.718389 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:21:12.718400 | orchestrator | 2026-02-03 06:21:12.718411 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-03 06:21:12.718421 | orchestrator | Tuesday 03 February 2026 06:20:50 +0000 (0:00:01.282) 0:26:04.091 ****** 2026-02-03 06:21:12.718432 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:21:12.718443 | orchestrator | 2026-02-03 06:21:12.718454 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-03 06:21:12.718465 | orchestrator | Tuesday 03 February 2026 06:20:52 +0000 (0:00:01.221) 0:26:05.313 ****** 2026-02-03 06:21:12.718476 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:21:12.718487 | orchestrator | 2026-02-03 06:21:12.718498 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-03 06:21:12.718509 | orchestrator | Tuesday 03 February 2026 06:20:53 +0000 (0:00:01.331) 0:26:06.644 ****** 2026-02-03 06:21:12.718520 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0 2026-02-03 06:21:12.718531 | orchestrator | 2026-02-03 06:21:12.718542 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-03 06:21:12.718553 | orchestrator | Tuesday 03 February 2026 06:20:54 +0000 (0:00:01.153) 0:26:07.798 ****** 2026-02-03 
06:21:12.718564 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph) 2026-02-03 06:21:12.718575 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/) 2026-02-03 06:21:12.718586 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-02-03 06:21:12.718597 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-02-03 06:21:12.718608 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-02-03 06:21:12.718619 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-02-03 06:21:12.718630 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-02-03 06:21:12.718641 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-02-03 06:21:12.718652 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-03 06:21:12.718663 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-03 06:21:12.718674 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-03 06:21:12.718684 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-03 06:21:12.718695 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-03 06:21:12.718706 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-03 06:21:12.718717 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph) 2026-02-03 06:21:12.718728 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph) 2026-02-03 06:21:12.718739 | orchestrator | 2026-02-03 06:21:12.718750 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-03 06:21:12.718761 | orchestrator | Tuesday 03 February 2026 06:21:01 +0000 (0:00:07.311) 0:26:15.109 ****** 2026-02-03 06:21:12.718772 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:21:12.718783 | orchestrator | 2026-02-03 06:21:12.718794 | orchestrator | TASK [ceph-config : 
Reset num_osds] ******************************************** 2026-02-03 06:21:12.718805 | orchestrator | Tuesday 03 February 2026 06:21:03 +0000 (0:00:01.241) 0:26:16.351 ****** 2026-02-03 06:21:12.718815 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:21:12.718833 | orchestrator | 2026-02-03 06:21:12.718844 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-03 06:21:12.718855 | orchestrator | Tuesday 03 February 2026 06:21:04 +0000 (0:00:01.199) 0:26:17.551 ****** 2026-02-03 06:21:12.718866 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:21:12.718877 | orchestrator | 2026-02-03 06:21:12.718888 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-03 06:21:12.718899 | orchestrator | Tuesday 03 February 2026 06:21:05 +0000 (0:00:01.271) 0:26:18.823 ****** 2026-02-03 06:21:12.718909 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:21:12.718920 | orchestrator | 2026-02-03 06:21:12.718931 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-03 06:21:12.718942 | orchestrator | Tuesday 03 February 2026 06:21:06 +0000 (0:00:01.179) 0:26:20.003 ****** 2026-02-03 06:21:12.718953 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:21:12.718964 | orchestrator | 2026-02-03 06:21:12.718975 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-03 06:21:12.718986 | orchestrator | Tuesday 03 February 2026 06:21:07 +0000 (0:00:01.160) 0:26:21.163 ****** 2026-02-03 06:21:12.718997 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:21:12.719008 | orchestrator | 2026-02-03 06:21:12.719019 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-03 06:21:12.719030 | orchestrator | Tuesday 03 February 2026 06:21:09 +0000 (0:00:01.152) 0:26:22.316 ****** 2026-02-03 
06:21:12.719041 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:21:12.719052 | orchestrator | 2026-02-03 06:21:12.719188 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-03 06:21:12.719205 | orchestrator | Tuesday 03 February 2026 06:21:10 +0000 (0:00:01.178) 0:26:23.494 ****** 2026-02-03 06:21:12.719216 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:21:12.719227 | orchestrator | 2026-02-03 06:21:12.719238 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-03 06:21:12.719249 | orchestrator | Tuesday 03 February 2026 06:21:11 +0000 (0:00:01.224) 0:26:24.719 ****** 2026-02-03 06:21:12.719260 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:21:12.719271 | orchestrator | 2026-02-03 06:21:12.719281 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-03 06:21:12.719302 | orchestrator | Tuesday 03 February 2026 06:21:12 +0000 (0:00:01.169) 0:26:25.888 ****** 2026-02-03 06:22:13.076435 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:22:13.076574 | orchestrator | 2026-02-03 06:22:13.076597 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-03 06:22:13.076614 | orchestrator | Tuesday 03 February 2026 06:21:14 +0000 (0:00:01.367) 0:26:27.255 ****** 2026-02-03 06:22:13.076628 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:22:13.076642 | orchestrator | 2026-02-03 06:22:13.076657 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-03 06:22:13.076672 | orchestrator | Tuesday 03 February 2026 06:21:15 +0000 (0:00:01.231) 0:26:28.487 ****** 2026-02-03 06:22:13.076688 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:22:13.076705 | orchestrator | 2026-02-03 06:22:13.076720 | 
orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-03 06:22:13.076736 | orchestrator | Tuesday 03 February 2026 06:21:16 +0000 (0:00:01.244) 0:26:29.731 ****** 2026-02-03 06:22:13.076751 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:22:13.076766 | orchestrator | 2026-02-03 06:22:13.076781 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-03 06:22:13.076797 | orchestrator | Tuesday 03 February 2026 06:21:17 +0000 (0:00:01.327) 0:26:31.059 ****** 2026-02-03 06:22:13.076812 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:22:13.076826 | orchestrator | 2026-02-03 06:22:13.076841 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-03 06:22:13.076856 | orchestrator | Tuesday 03 February 2026 06:21:19 +0000 (0:00:01.193) 0:26:32.252 ****** 2026-02-03 06:22:13.076900 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:22:13.076916 | orchestrator | 2026-02-03 06:22:13.076932 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-03 06:22:13.076947 | orchestrator | Tuesday 03 February 2026 06:21:20 +0000 (0:00:01.295) 0:26:33.547 ****** 2026-02-03 06:22:13.076961 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:22:13.076976 | orchestrator | 2026-02-03 06:22:13.076993 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-03 06:22:13.077010 | orchestrator | Tuesday 03 February 2026 06:21:21 +0000 (0:00:01.186) 0:26:34.734 ****** 2026-02-03 06:22:13.077026 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:22:13.077089 | orchestrator | 2026-02-03 06:22:13.077110 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-03 06:22:13.077129 | orchestrator | Tuesday 03 
February 2026 06:21:22 +0000 (0:00:01.152) 0:26:35.887 ****** 2026-02-03 06:22:13.077147 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:22:13.077165 | orchestrator | 2026-02-03 06:22:13.077182 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-03 06:22:13.077199 | orchestrator | Tuesday 03 February 2026 06:21:23 +0000 (0:00:01.232) 0:26:37.119 ****** 2026-02-03 06:22:13.077216 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:22:13.077233 | orchestrator | 2026-02-03 06:22:13.077251 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-03 06:22:13.077268 | orchestrator | Tuesday 03 February 2026 06:21:25 +0000 (0:00:01.163) 0:26:38.283 ****** 2026-02-03 06:22:13.077284 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:22:13.077302 | orchestrator | 2026-02-03 06:22:13.077320 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-03 06:22:13.077336 | orchestrator | Tuesday 03 February 2026 06:21:26 +0000 (0:00:01.222) 0:26:39.505 ****** 2026-02-03 06:22:13.077353 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:22:13.077371 | orchestrator | 2026-02-03 06:22:13.077389 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-03 06:22:13.077407 | orchestrator | Tuesday 03 February 2026 06:21:27 +0000 (0:00:01.223) 0:26:40.729 ****** 2026-02-03 06:22:13.077424 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-03 06:22:13.077442 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-03 06:22:13.077459 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-03 06:22:13.077476 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:22:13.077493 | orchestrator | 2026-02-03 06:22:13.077511 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_interface - ipv4] ****** 2026-02-03 06:22:13.077528 | orchestrator | Tuesday 03 February 2026 06:21:29 +0000 (0:00:01.925) 0:26:42.654 ****** 2026-02-03 06:22:13.077546 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-03 06:22:13.077564 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-03 06:22:13.077581 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-03 06:22:13.077599 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:22:13.077616 | orchestrator | 2026-02-03 06:22:13.077633 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-03 06:22:13.077644 | orchestrator | Tuesday 03 February 2026 06:21:31 +0000 (0:00:01.902) 0:26:44.557 ****** 2026-02-03 06:22:13.077659 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-03 06:22:13.077675 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-03 06:22:13.077691 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-03 06:22:13.077706 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:22:13.077723 | orchestrator | 2026-02-03 06:22:13.077739 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-03 06:22:13.077749 | orchestrator | Tuesday 03 February 2026 06:21:33 +0000 (0:00:02.144) 0:26:46.702 ****** 2026-02-03 06:22:13.077775 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:22:13.077785 | orchestrator | 2026-02-03 06:22:13.077795 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-03 06:22:13.077804 | orchestrator | Tuesday 03 February 2026 06:21:34 +0000 (0:00:01.235) 0:26:47.937 ****** 2026-02-03 06:22:13.077815 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-02-03 06:22:13.077824 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:22:13.077834 | orchestrator 
| 2026-02-03 06:22:13.077844 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-03 06:22:13.077877 | orchestrator | Tuesday 03 February 2026 06:21:36 +0000 (0:00:01.429) 0:26:49.366 ****** 2026-02-03 06:22:13.077906 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:22:13.077923 | orchestrator | 2026-02-03 06:22:13.077938 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-03 06:22:13.077955 | orchestrator | Tuesday 03 February 2026 06:21:38 +0000 (0:00:01.835) 0:26:51.202 ****** 2026-02-03 06:22:13.077972 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-03 06:22:13.077990 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:22:13.078003 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:22:13.078012 | orchestrator | 2026-02-03 06:22:13.078130 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-03 06:22:13.078141 | orchestrator | Tuesday 03 February 2026 06:21:39 +0000 (0:00:01.809) 0:26:53.012 ****** 2026-02-03 06:22:13.078150 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0 2026-02-03 06:22:13.078160 | orchestrator | 2026-02-03 06:22:13.078170 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-02-03 06:22:13.078179 | orchestrator | Tuesday 03 February 2026 06:21:41 +0000 (0:00:01.519) 0:26:54.532 ****** 2026-02-03 06:22:13.078187 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:22:13.078195 | orchestrator | 2026-02-03 06:22:13.078203 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-02-03 06:22:13.078211 | orchestrator | Tuesday 03 February 2026 06:21:42 +0000 (0:00:01.581) 0:26:56.114 ****** 2026-02-03 06:22:13.078219 | 
orchestrator | skipping: [testbed-node-0] 2026-02-03 06:22:13.078227 | orchestrator | 2026-02-03 06:22:13.078239 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-02-03 06:22:13.078253 | orchestrator | Tuesday 03 February 2026 06:21:44 +0000 (0:00:01.203) 0:26:57.318 ****** 2026-02-03 06:22:13.078265 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-03 06:22:13.078279 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-03 06:22:13.078291 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-03 06:22:13.078303 | orchestrator | ok: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-02-03 06:22:13.078315 | orchestrator | 2026-02-03 06:22:13.078328 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-02-03 06:22:13.078341 | orchestrator | Tuesday 03 February 2026 06:21:52 +0000 (0:00:07.905) 0:27:05.223 ****** 2026-02-03 06:22:13.078354 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:22:13.078368 | orchestrator | 2026-02-03 06:22:13.078380 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-02-03 06:22:13.078393 | orchestrator | Tuesday 03 February 2026 06:21:53 +0000 (0:00:01.233) 0:27:06.457 ****** 2026-02-03 06:22:13.078407 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-03 06:22:13.078421 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-03 06:22:13.078430 | orchestrator | 2026-02-03 06:22:13.078437 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-02-03 06:22:13.078445 | orchestrator | Tuesday 03 February 2026 06:21:57 +0000 (0:00:03.868) 0:27:10.326 ****** 2026-02-03 06:22:13.078453 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-03 06:22:13.078461 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-03 06:22:13.078477 | orchestrator 
| 2026-02-03 06:22:13.078485 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-02-03 06:22:13.078492 | orchestrator | Tuesday 03 February 2026 06:21:59 +0000 (0:00:02.076) 0:27:12.402 ******
2026-02-03 06:22:13.078500 | orchestrator | ok: [testbed-node-0]
2026-02-03 06:22:13.078508 | orchestrator |
2026-02-03 06:22:13.078516 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-02-03 06:22:13.078523 | orchestrator | Tuesday 03 February 2026 06:22:00 +0000 (0:00:01.621) 0:27:14.023 ******
2026-02-03 06:22:13.078531 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:22:13.078539 | orchestrator |
2026-02-03 06:22:13.078547 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-03 06:22:13.078555 | orchestrator | Tuesday 03 February 2026 06:22:02 +0000 (0:00:01.285) 0:27:15.308 ******
2026-02-03 06:22:13.078562 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:22:13.078570 | orchestrator |
2026-02-03 06:22:13.078578 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-03 06:22:13.078586 | orchestrator | Tuesday 03 February 2026 06:22:03 +0000 (0:00:01.190) 0:27:16.499 ******
2026-02-03 06:22:13.078594 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0
2026-02-03 06:22:13.078601 | orchestrator |
2026-02-03 06:22:13.078609 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-02-03 06:22:13.078617 | orchestrator | Tuesday 03 February 2026 06:22:04 +0000 (0:00:01.535) 0:27:18.035 ******
2026-02-03 06:22:13.078625 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:22:13.078632 | orchestrator |
2026-02-03 06:22:13.078640 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-02-03 06:22:13.078648 | orchestrator | Tuesday 03 February 2026 06:22:06 +0000 (0:00:01.242) 0:27:19.278 ******
2026-02-03 06:22:13.078655 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:22:13.078663 | orchestrator |
2026-02-03 06:22:13.078671 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-02-03 06:22:13.078679 | orchestrator | Tuesday 03 February 2026 06:22:07 +0000 (0:00:01.174) 0:27:20.452 ******
2026-02-03 06:22:13.078687 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0
2026-02-03 06:22:13.078694 | orchestrator |
2026-02-03 06:22:13.078702 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-02-03 06:22:13.078710 | orchestrator | Tuesday 03 February 2026 06:22:08 +0000 (0:00:01.573) 0:27:22.025 ******
2026-02-03 06:22:13.078718 | orchestrator | ok: [testbed-node-0]
2026-02-03 06:22:13.078725 | orchestrator |
2026-02-03 06:22:13.078733 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-02-03 06:22:13.078741 | orchestrator | Tuesday 03 February 2026 06:22:11 +0000 (0:00:02.181) 0:27:24.207 ******
2026-02-03 06:22:13.078749 | orchestrator | ok: [testbed-node-0]
2026-02-03 06:22:13.078756 | orchestrator |
2026-02-03 06:22:13.078778 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-02-03 06:22:59.966951 | orchestrator | Tuesday 03 February 2026 06:22:13 +0000 (0:00:02.044) 0:27:26.252 ******
2026-02-03 06:22:59.967087 | orchestrator | ok: [testbed-node-0]
2026-02-03 06:22:59.967102 | orchestrator |
2026-02-03 06:22:59.967110 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-02-03 06:22:59.967117 | orchestrator | Tuesday 03 February 2026 06:22:15 +0000 (0:00:02.618) 0:27:28.871 ******
2026-02-03 06:22:59.967124 | orchestrator | changed: [testbed-node-0]
2026-02-03 06:22:59.967131 | orchestrator |
2026-02-03 06:22:59.967138 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-03 06:22:59.967145 | orchestrator | Tuesday 03 February 2026 06:22:19 +0000 (0:00:04.188) 0:27:33.060 ******
2026-02-03 06:22:59.967151 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:22:59.967157 | orchestrator |
2026-02-03 06:22:59.967164 | orchestrator | PLAY [Upgrade ceph mgr nodes] **************************************************
2026-02-03 06:22:59.967170 | orchestrator |
2026-02-03 06:22:59.967177 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-02-03 06:22:59.967202 | orchestrator | Tuesday 03 February 2026 06:22:20 +0000 (0:00:01.091) 0:27:34.151 ******
2026-02-03 06:22:59.967208 | orchestrator | changed: [testbed-node-1]
2026-02-03 06:22:59.967215 | orchestrator |
2026-02-03 06:22:59.967221 | orchestrator | TASK [Mask ceph mgr systemd unit] **********************************************
2026-02-03 06:22:59.967227 | orchestrator | Tuesday 03 February 2026 06:22:33 +0000 (0:00:12.684) 0:27:46.836 ******
2026-02-03 06:22:59.967233 | orchestrator | changed: [testbed-node-1]
2026-02-03 06:22:59.967240 | orchestrator |
2026-02-03 06:22:59.967246 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-03 06:22:59.967253 | orchestrator | Tuesday 03 February 2026 06:22:35 +0000 (0:00:02.269) 0:27:49.106 ******
2026-02-03 06:22:59.967259 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1
2026-02-03 06:22:59.967266 | orchestrator |
2026-02-03 06:22:59.967272 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-03 06:22:59.967278 | orchestrator | Tuesday 03 February 2026 06:22:37 +0000 (0:00:01.210) 0:27:50.317 ******
2026-02-03 06:22:59.967284 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:22:59.967291 | orchestrator |
2026-02-03 06:22:59.967297 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-03 06:22:59.967303 | orchestrator | Tuesday 03 February 2026 06:22:38 +0000 (0:00:01.545) 0:27:51.862 ******
2026-02-03 06:22:59.967309 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:22:59.967316 | orchestrator |
2026-02-03 06:22:59.967322 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-03 06:22:59.967328 | orchestrator | Tuesday 03 February 2026 06:22:39 +0000 (0:00:01.175) 0:27:53.038 ******
2026-02-03 06:22:59.967334 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:22:59.967340 | orchestrator |
2026-02-03 06:22:59.967347 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-03 06:22:59.967353 | orchestrator | Tuesday 03 February 2026 06:22:41 +0000 (0:00:01.524) 0:27:54.562 ******
2026-02-03 06:22:59.967359 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:22:59.967365 | orchestrator |
2026-02-03 06:22:59.967372 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-03 06:22:59.967378 | orchestrator | Tuesday 03 February 2026 06:22:42 +0000 (0:00:01.286) 0:27:55.849 ******
2026-02-03 06:22:59.967384 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:22:59.967390 | orchestrator |
2026-02-03 06:22:59.967397 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-03 06:22:59.967403 | orchestrator | Tuesday 03 February 2026 06:22:44 +0000 (0:00:01.355) 0:27:57.205 ******
2026-02-03 06:22:59.967409 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:22:59.967416 | orchestrator |
2026-02-03 06:22:59.967422 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-03 06:22:59.967429 | orchestrator | Tuesday 03 February 2026 06:22:45 +0000 (0:00:01.247) 0:27:58.453 ******
2026-02-03 06:22:59.967435 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:22:59.967441 | orchestrator |
2026-02-03 06:22:59.967448 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-03 06:22:59.967454 | orchestrator | Tuesday 03 February 2026 06:22:46 +0000 (0:00:01.282) 0:27:59.735 ******
2026-02-03 06:22:59.967460 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:22:59.967467 | orchestrator |
2026-02-03 06:22:59.967475 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-03 06:22:59.967482 | orchestrator | Tuesday 03 February 2026 06:22:47 +0000 (0:00:01.264) 0:28:01.000 ******
2026-02-03 06:22:59.967489 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-03 06:22:59.967497 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-03 06:22:59.967505 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-03 06:22:59.967512 | orchestrator |
2026-02-03 06:22:59.967519 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-03 06:22:59.967534 | orchestrator | Tuesday 03 February 2026 06:22:49 +0000 (0:00:01.822) 0:28:02.823 ******
2026-02-03 06:22:59.967542 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:22:59.967548 | orchestrator |
2026-02-03 06:22:59.967556 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-03 06:22:59.967563 | orchestrator | Tuesday 03 February 2026 06:22:51 +0000 (0:00:01.377) 0:28:04.200 ******
2026-02-03 06:22:59.967571 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-03 06:22:59.967579 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-03 06:22:59.967586 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-03 06:22:59.967594 | orchestrator |
2026-02-03 06:22:59.967601 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-03 06:22:59.967609 | orchestrator | Tuesday 03 February 2026 06:22:53 +0000 (0:00:02.956) 0:28:07.157 ******
2026-02-03 06:22:59.967620 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-03 06:22:59.967663 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-03 06:22:59.967676 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-03 06:22:59.967689 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:22:59.967699 | orchestrator |
2026-02-03 06:22:59.967710 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-03 06:22:59.967720 | orchestrator | Tuesday 03 February 2026 06:22:55 +0000 (0:00:01.514) 0:28:08.672 ******
2026-02-03 06:22:59.967735 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-03 06:22:59.967748 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-03 06:22:59.967758 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-03 06:22:59.967765 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:22:59.967773 | orchestrator |
2026-02-03 06:22:59.967780 | orchestrator | TASK [ceph-facts : Set_fact running_mon - 
non_container] ***********************
2026-02-03 06:22:59.967787 | orchestrator | Tuesday 03 February 2026 06:22:57 +0000 (0:00:02.032) 0:28:10.704 ******
2026-02-03 06:22:59.967797 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-03 06:22:59.967807 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-03 06:22:59.967816 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-03 06:22:59.967833 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:22:59.967843 | orchestrator |
2026-02-03 06:22:59.967853 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-03 06:22:59.967863 | orchestrator | Tuesday 03 February 2026 06:22:58 +0000 (0:00:01.200) 0:28:11.905 ******
2026-02-03 06:22:59.967876 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'fc9af7e241e8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-03 06:22:51.546502', 'end': '2026-02-03 06:22:51.596135', 'delta': '0:00:00.049633', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fc9af7e241e8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-03 06:22:59.967902 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'a8f198eef309', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-03 06:22:52.161934', 'end': '2026-02-03 06:22:52.207378', 'delta': '0:00:00.045444', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a8f198eef309'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-03 06:23:19.703486 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '79d18794d8bb', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-03 06:22:52.726330', 'end': '2026-02-03 06:22:52.779269', 'delta': '0:00:00.052939', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['79d18794d8bb'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-03 06:23:19.703602 | orchestrator |
2026-02-03 06:23:19.703620 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-03 06:23:19.703633 | orchestrator | Tuesday 03 February 2026 06:22:59 +0000 (0:00:01.234) 0:28:13.139 ******
2026-02-03 06:23:19.703645 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:23:19.703658 | orchestrator |
2026-02-03 06:23:19.703669 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-03 06:23:19.703681 | orchestrator | Tuesday 03 February 2026 06:23:01 +0000 (0:00:01.396) 0:28:14.540 ******
2026-02-03 06:23:19.703693 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:23:19.703705 | orchestrator |
2026-02-03 06:23:19.703716 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-03 06:23:19.703727 | orchestrator | Tuesday 03 February 2026 06:23:02 +0000 (0:00:01.316) 0:28:15.937 ******
2026-02-03 06:23:19.703738 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:23:19.703749 | orchestrator |
2026-02-03 06:23:19.703760 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-03 06:23:19.703771 | orchestrator | Tuesday 03 February 2026 06:23:04 +0000 (0:00:02.139) 0:28:17.254 ******
2026-02-03 06:23:19.703782 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-02-03 06:23:19.703793 | orchestrator |
2026-02-03 06:23:19.703804 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-03 06:23:19.703839 | orchestrator | Tuesday 03 February 2026 06:23:06 +0000 (0:00:02.139) 0:28:19.393 ******
2026-02-03 06:23:19.703850 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:23:19.703861 | orchestrator |
2026-02-03 06:23:19.703872 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-03 06:23:19.703883 | orchestrator | Tuesday 03 February 2026 06:23:07 +0000 (0:00:01.185) 0:28:20.579 ******
2026-02-03 06:23:19.703894 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:23:19.703905 | orchestrator |
2026-02-03 06:23:19.703916 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-03 06:23:19.703927 | orchestrator | Tuesday 03 February 2026 06:23:08 +0000 (0:00:01.199) 0:28:21.779 ******
2026-02-03 06:23:19.703937 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:23:19.703948 | orchestrator |
2026-02-03 06:23:19.703959 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-03 06:23:19.703970 | orchestrator | Tuesday 03 February 2026 06:23:09 +0000 (0:00:01.309) 0:28:23.089 ******
2026-02-03 06:23:19.703980 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:23:19.703991 | orchestrator |
2026-02-03 06:23:19.704024 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-03 06:23:19.704038 | orchestrator | Tuesday 03 February 2026 06:23:11 +0000 (0:00:01.169) 0:28:24.258 ******
2026-02-03 06:23:19.704050 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:23:19.704062 | orchestrator |
2026-02-03 06:23:19.704075 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-03 06:23:19.704087 | orchestrator | Tuesday 03 February 2026 06:23:12 +0000 (0:00:01.276) 0:28:25.534 ******
2026-02-03 06:23:19.704100 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:23:19.704112 | orchestrator |
2026-02-03 06:23:19.704125 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] 
*************************** 2026-02-03 06:23:19.704138 | orchestrator | Tuesday 03 February 2026 06:23:13 +0000 (0:00:01.178) 0:28:26.713 ****** 2026-02-03 06:23:19.704151 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:23:19.704164 | orchestrator | 2026-02-03 06:23:19.704177 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-03 06:23:19.704190 | orchestrator | Tuesday 03 February 2026 06:23:14 +0000 (0:00:01.159) 0:28:27.873 ****** 2026-02-03 06:23:19.704202 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:23:19.704215 | orchestrator | 2026-02-03 06:23:19.704227 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-03 06:23:19.704240 | orchestrator | Tuesday 03 February 2026 06:23:15 +0000 (0:00:01.268) 0:28:29.141 ****** 2026-02-03 06:23:19.704253 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:23:19.704266 | orchestrator | 2026-02-03 06:23:19.704278 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-03 06:23:19.704291 | orchestrator | Tuesday 03 February 2026 06:23:17 +0000 (0:00:01.216) 0:28:30.358 ****** 2026-02-03 06:23:19.704304 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:23:19.704317 | orchestrator | 2026-02-03 06:23:19.704329 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-03 06:23:19.704342 | orchestrator | Tuesday 03 February 2026 06:23:18 +0000 (0:00:01.208) 0:28:31.566 ****** 2026-02-03 06:23:19.704392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 
Bytes', 'host': '', 'holders': []}})  2026-02-03 06:23:19.704409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:23:19.704431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:23:19.704444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-34-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-03 06:23:19.704456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': 
'', 'holders': []}})  2026-02-03 06:23:19.704468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:23:19.704480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:23:19.704510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '24352e15', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part16', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part14', 
'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part15', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part1', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-03 06:23:21.055075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:23:21.055181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:23:21.055199 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:23:21.055213 | orchestrator | 2026-02-03 06:23:21.055225 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-03 06:23:21.055237 | orchestrator | Tuesday 03 February 2026 06:23:19 +0000 (0:00:01.302) 0:28:32.869 ****** 2026-02-03 06:23:21.055252 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:23:21.055267 | 
orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:23:21.055279 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:23:21.055310 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-34-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:23:21.055366 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:23:21.055379 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:23:21.055391 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:23:21.055412 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '24352e15', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part16', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part14', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part15', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part1', 'scsi-SQEMU_QEMU_HARDDISK_24352e15-6dea-4a0f-b242-96c62f6cf142-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:23:21.055442 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:24:00.924611 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:24:00.924730 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:24:00.924748 | orchestrator | 2026-02-03 06:24:00.924762 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-03 06:24:00.924775 | 
orchestrator | Tuesday 03 February 2026 06:23:21 +0000 (0:00:01.361) 0:28:34.230 ******
2026-02-03 06:24:00.924786 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:24:00.924799 | orchestrator |
2026-02-03 06:24:00.924810 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-03 06:24:00.924822 | orchestrator | Tuesday 03 February 2026 06:23:22 +0000 (0:00:01.564) 0:28:35.794 ******
2026-02-03 06:24:00.924834 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:24:00.924845 | orchestrator |
2026-02-03 06:24:00.924856 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-03 06:24:00.924867 | orchestrator | Tuesday 03 February 2026 06:23:23 +0000 (0:00:01.198) 0:28:36.993 ******
2026-02-03 06:24:00.924878 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:24:00.924889 | orchestrator |
2026-02-03 06:24:00.924900 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-03 06:24:00.924911 | orchestrator | Tuesday 03 February 2026 06:23:25 +0000 (0:00:01.628) 0:28:38.622 ******
2026-02-03 06:24:00.924922 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:00.924933 | orchestrator |
2026-02-03 06:24:00.924944 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-03 06:24:00.924955 | orchestrator | Tuesday 03 February 2026 06:23:26 +0000 (0:00:01.225) 0:28:39.847 ******
2026-02-03 06:24:00.924966 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:00.925025 | orchestrator |
2026-02-03 06:24:00.925038 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-03 06:24:00.925049 | orchestrator | Tuesday 03 February 2026 06:23:27 +0000 (0:00:01.321) 0:28:41.169 ******
2026-02-03 06:24:00.925059 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:00.925071 | orchestrator |
2026-02-03 06:24:00.925082 |
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-03 06:24:00.925118 | orchestrator | Tuesday 03 February 2026 06:23:29 +0000 (0:00:01.178) 0:28:42.348 ******
2026-02-03 06:24:00.925130 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-02-03 06:24:00.925143 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-03 06:24:00.925156 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-02-03 06:24:00.925170 | orchestrator |
2026-02-03 06:24:00.925184 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-03 06:24:00.925197 | orchestrator | Tuesday 03 February 2026 06:23:31 +0000 (0:00:01.910) 0:28:44.258 ******
2026-02-03 06:24:00.925210 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-03 06:24:00.925224 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-03 06:24:00.925237 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-03 06:24:00.925250 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:00.925264 | orchestrator |
2026-02-03 06:24:00.925277 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-03 06:24:00.925290 | orchestrator | Tuesday 03 February 2026 06:23:32 +0000 (0:00:01.276) 0:28:45.534 ******
2026-02-03 06:24:00.925303 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:00.925316 | orchestrator |
2026-02-03 06:24:00.925345 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-03 06:24:00.925359 | orchestrator | Tuesday 03 February 2026 06:23:33 +0000 (0:00:01.265) 0:28:46.799 ******
2026-02-03 06:24:00.925372 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-03 06:24:00.925386 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-03 06:24:00.925400 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-03 06:24:00.925413 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-03 06:24:00.925426 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-03 06:24:00.925439 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-03 06:24:00.925452 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-03 06:24:00.925465 | orchestrator |
2026-02-03 06:24:00.925478 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-03 06:24:00.925491 | orchestrator | Tuesday 03 February 2026 06:23:35 +0000 (0:00:02.364) 0:28:49.164 ******
2026-02-03 06:24:00.925503 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-03 06:24:00.925514 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-03 06:24:00.925525 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-03 06:24:00.925536 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-03 06:24:00.925566 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-03 06:24:00.925578 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-03 06:24:00.925589 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-03 06:24:00.925600 | orchestrator |
2026-02-03 06:24:00.925611 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-03 06:24:00.925622 | orchestrator | Tuesday 03 February 2026 06:23:38 +0000 (0:00:02.662) 0:28:51.826
******
2026-02-03 06:24:00.925633 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1
2026-02-03 06:24:00.925644 | orchestrator |
2026-02-03 06:24:00.925655 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-03 06:24:00.925666 | orchestrator | Tuesday 03 February 2026 06:23:40 +0000 (0:00:01.521) 0:28:53.348 ******
2026-02-03 06:24:00.925677 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1
2026-02-03 06:24:00.925697 | orchestrator |
2026-02-03 06:24:00.925708 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-03 06:24:00.925719 | orchestrator | Tuesday 03 February 2026 06:23:41 +0000 (0:00:01.196) 0:28:54.546 ******
2026-02-03 06:24:00.925730 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:24:00.925741 | orchestrator |
2026-02-03 06:24:00.925752 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-03 06:24:00.925763 | orchestrator | Tuesday 03 February 2026 06:23:42 +0000 (0:00:01.578) 0:28:56.125 ******
2026-02-03 06:24:00.925774 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:00.925785 | orchestrator |
2026-02-03 06:24:00.925796 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-03 06:24:00.925807 | orchestrator | Tuesday 03 February 2026 06:23:44 +0000 (0:00:01.213) 0:28:57.338 ******
2026-02-03 06:24:00.925818 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:00.925829 | orchestrator |
2026-02-03 06:24:00.925840 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-03 06:24:00.925851 | orchestrator | Tuesday 03 February 2026 06:23:45 +0000 (0:00:01.205) 0:28:58.544 ******
2026-02-03 06:24:00.925862 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:00.925873 | orchestrator |
2026-02-03 06:24:00.925884 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-03 06:24:00.925895 | orchestrator | Tuesday 03 February 2026 06:23:46 +0000 (0:00:01.244) 0:28:59.789 ******
2026-02-03 06:24:00.925906 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:24:00.925917 | orchestrator |
2026-02-03 06:24:00.925928 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-03 06:24:00.925939 | orchestrator | Tuesday 03 February 2026 06:23:48 +0000 (0:00:01.744) 0:29:01.533 ******
2026-02-03 06:24:00.925950 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:00.925961 | orchestrator |
2026-02-03 06:24:00.925972 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-03 06:24:00.926005 | orchestrator | Tuesday 03 February 2026 06:23:49 +0000 (0:00:01.211) 0:29:02.745 ******
2026-02-03 06:24:00.926070 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:00.926083 | orchestrator |
2026-02-03 06:24:00.926094 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-03 06:24:00.926105 | orchestrator | Tuesday 03 February 2026 06:23:50 +0000 (0:00:01.155) 0:29:03.900 ******
2026-02-03 06:24:00.926115 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:24:00.926135 | orchestrator |
2026-02-03 06:24:00.926146 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-03 06:24:00.926157 | orchestrator | Tuesday 03 February 2026 06:23:52 +0000 (0:00:01.724) 0:29:05.625 ******
2026-02-03 06:24:00.926169 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:24:00.926180 | orchestrator |
2026-02-03 06:24:00.926191 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-03 06:24:00.926202 | orchestrator | Tuesday 03 February 2026 06:23:54 +0000 (0:00:01.687) 0:29:07.312 ******
2026-02-03 06:24:00.926212 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:00.926223 | orchestrator |
2026-02-03 06:24:00.926234 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-03 06:24:00.926252 | orchestrator | Tuesday 03 February 2026 06:23:55 +0000 (0:00:00.938) 0:29:08.251 ******
2026-02-03 06:24:00.926263 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:24:00.926274 | orchestrator |
2026-02-03 06:24:00.926285 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-03 06:24:00.926296 | orchestrator | Tuesday 03 February 2026 06:23:55 +0000 (0:00:00.902) 0:29:09.154 ******
2026-02-03 06:24:00.926307 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:00.926318 | orchestrator |
2026-02-03 06:24:00.926329 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-03 06:24:00.926340 | orchestrator | Tuesday 03 February 2026 06:23:56 +0000 (0:00:00.816) 0:29:09.971 ******
2026-02-03 06:24:00.926351 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:00.926370 | orchestrator |
2026-02-03 06:24:00.926381 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-03 06:24:00.926392 | orchestrator | Tuesday 03 February 2026 06:23:57 +0000 (0:00:00.809) 0:29:10.780 ******
2026-02-03 06:24:00.926402 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:00.926413 | orchestrator |
2026-02-03 06:24:00.926424 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-03 06:24:00.926435 | orchestrator | Tuesday 03 February 2026 06:23:58 +0000 (0:00:00.805) 0:29:11.586 ******
2026-02-03 06:24:00.926446 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:00.926457 | orchestrator |
2026-02-03 06:24:00.926468 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-03 06:24:00.926479 | orchestrator | Tuesday 03 February 2026 06:23:59 +0000 (0:00:00.818) 0:29:12.405 ******
2026-02-03 06:24:00.926490 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:00.926501 | orchestrator |
2026-02-03 06:24:00.926512 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-03 06:24:00.926523 | orchestrator | Tuesday 03 February 2026 06:24:00 +0000 (0:00:00.806) 0:29:13.211 ******
2026-02-03 06:24:00.926542 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:24:46.104456 | orchestrator |
2026-02-03 06:24:46.104548 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-03 06:24:46.104561 | orchestrator | Tuesday 03 February 2026 06:24:00 +0000 (0:00:00.887) 0:29:14.099 ******
2026-02-03 06:24:46.104568 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:24:46.104576 | orchestrator |
2026-02-03 06:24:46.104583 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-03 06:24:46.104590 | orchestrator | Tuesday 03 February 2026 06:24:01 +0000 (0:00:00.882) 0:29:14.981 ******
2026-02-03 06:24:46.104596 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:24:46.104602 | orchestrator |
2026-02-03 06:24:46.104609 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-03 06:24:46.104615 | orchestrator | Tuesday 03 February 2026 06:24:02 +0000 (0:00:00.912) 0:29:15.893 ******
2026-02-03 06:24:46.104623 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:46.104630 | orchestrator |
2026-02-03 06:24:46.104636 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-03 06:24:46.104643 | orchestrator | Tuesday 03 February 2026 06:24:03 +0000 (0:00:00.810) 0:29:16.704 ******
2026-02-03 06:24:46.104650 |
orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:46.104657 | orchestrator |
2026-02-03 06:24:46.104663 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-03 06:24:46.104670 | orchestrator | Tuesday 03 February 2026 06:24:04 +0000 (0:00:00.794) 0:29:17.498 ******
2026-02-03 06:24:46.104677 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:46.104683 | orchestrator |
2026-02-03 06:24:46.104689 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-03 06:24:46.104696 | orchestrator | Tuesday 03 February 2026 06:24:05 +0000 (0:00:00.875) 0:29:18.374 ******
2026-02-03 06:24:46.104702 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:46.104709 | orchestrator |
2026-02-03 06:24:46.104717 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-03 06:24:46.104724 | orchestrator | Tuesday 03 February 2026 06:24:06 +0000 (0:00:01.100) 0:29:19.475 ******
2026-02-03 06:24:46.104730 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:46.104737 | orchestrator |
2026-02-03 06:24:46.104744 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-03 06:24:46.104752 | orchestrator | Tuesday 03 February 2026 06:24:07 +0000 (0:00:00.815) 0:29:20.360 ******
2026-02-03 06:24:46.104762 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:46.104773 | orchestrator |
2026-02-03 06:24:46.104780 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-03 06:24:46.104787 | orchestrator | Tuesday 03 February 2026 06:24:07 +0000 (0:00:00.815) 0:29:21.176 ******
2026-02-03 06:24:46.104793 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:46.104822 | orchestrator |
2026-02-03 06:24:46.104831 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-03 06:24:46.104840 | orchestrator | Tuesday 03 February 2026 06:24:08 +0000 (0:00:00.791) 0:29:21.967 ******
2026-02-03 06:24:46.104846 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:46.104853 | orchestrator |
2026-02-03 06:24:46.104860 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-03 06:24:46.104866 | orchestrator | Tuesday 03 February 2026 06:24:09 +0000 (0:00:00.794) 0:29:22.762 ******
2026-02-03 06:24:46.104873 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:46.104880 | orchestrator |
2026-02-03 06:24:46.104887 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-03 06:24:46.104894 | orchestrator | Tuesday 03 February 2026 06:24:10 +0000 (0:00:00.907) 0:29:23.669 ******
2026-02-03 06:24:46.104900 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:46.104906 | orchestrator |
2026-02-03 06:24:46.104913 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-03 06:24:46.104920 | orchestrator | Tuesday 03 February 2026 06:24:11 +0000 (0:00:00.830) 0:29:24.499 ******
2026-02-03 06:24:46.104927 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:46.104933 | orchestrator |
2026-02-03 06:24:46.104939 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-03 06:24:46.104945 | orchestrator | Tuesday 03 February 2026 06:24:12 +0000 (0:00:00.910) 0:29:25.410 ******
2026-02-03 06:24:46.104952 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:46.104989 | orchestrator |
2026-02-03 06:24:46.105011 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-03 06:24:46.105019 | orchestrator | Tuesday 03 February 2026 06:24:13 +0000 (0:00:00.841) 0:29:26.252 ******
2026-02-03 06:24:46.105025 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:24:46.105032 | orchestrator |
2026-02-03 06:24:46.105039 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-03 06:24:46.105046 | orchestrator | Tuesday 03 February 2026 06:24:14 +0000 (0:00:01.734) 0:29:27.986 ******
2026-02-03 06:24:46.105053 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:24:46.105059 | orchestrator |
2026-02-03 06:24:46.105065 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-03 06:24:46.105072 | orchestrator | Tuesday 03 February 2026 06:24:17 +0000 (0:00:02.270) 0:29:30.256 ******
2026-02-03 06:24:46.105078 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1
2026-02-03 06:24:46.105085 | orchestrator |
2026-02-03 06:24:46.105092 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-03 06:24:46.105098 | orchestrator | Tuesday 03 February 2026 06:24:18 +0000 (0:00:01.405) 0:29:31.662 ******
2026-02-03 06:24:46.105104 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:46.105110 | orchestrator |
2026-02-03 06:24:46.105116 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-03 06:24:46.105123 | orchestrator | Tuesday 03 February 2026 06:24:19 +0000 (0:00:01.207) 0:29:32.870 ******
2026-02-03 06:24:46.105129 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:46.105135 | orchestrator |
2026-02-03 06:24:46.105142 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-03 06:24:46.105148 | orchestrator | Tuesday 03 February 2026 06:24:20 +0000 (0:00:01.269) 0:29:34.139 ******
2026-02-03 06:24:46.105174 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-03 06:24:46.105181 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-03 06:24:46.105188 | orchestrator |
2026-02-03 06:24:46.105194 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-03 06:24:46.105200 | orchestrator | Tuesday 03 February 2026 06:24:22 +0000 (0:00:02.037) 0:29:36.176 ******
2026-02-03 06:24:46.105206 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:24:46.105212 | orchestrator |
2026-02-03 06:24:46.105228 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-03 06:24:46.105235 | orchestrator | Tuesday 03 February 2026 06:24:24 +0000 (0:00:01.623) 0:29:37.800 ******
2026-02-03 06:24:46.105242 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:46.105248 | orchestrator |
2026-02-03 06:24:46.105254 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-03 06:24:46.105259 | orchestrator | Tuesday 03 February 2026 06:24:25 +0000 (0:00:01.256) 0:29:39.057 ******
2026-02-03 06:24:46.105265 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:46.105271 | orchestrator |
2026-02-03 06:24:46.105277 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-03 06:24:46.105283 | orchestrator | Tuesday 03 February 2026 06:24:26 +0000 (0:00:00.842) 0:29:39.899 ******
2026-02-03 06:24:46.105290 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:46.105295 | orchestrator |
2026-02-03 06:24:46.105301 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-03 06:24:46.105307 | orchestrator | Tuesday 03 February 2026 06:24:27 +0000 (0:00:00.808) 0:29:40.707 ******
2026-02-03 06:24:46.105314 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1
2026-02-03 06:24:46.105320 | orchestrator |
2026-02-03 06:24:46.105327 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-03 06:24:46.105333 | orchestrator | Tuesday 03 February 2026 06:24:28 +0000 (0:00:01.169) 0:29:41.877 ******
2026-02-03 06:24:46.105339 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:24:46.105345 | orchestrator |
2026-02-03 06:24:46.105351 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-03 06:24:46.105356 | orchestrator | Tuesday 03 February 2026 06:24:30 +0000 (0:00:01.844) 0:29:43.722 ******
2026-02-03 06:24:46.105362 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-03 06:24:46.105369 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-03 06:24:46.105376 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-03 06:24:46.105382 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:46.105388 | orchestrator |
2026-02-03 06:24:46.105394 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-03 06:24:46.105400 | orchestrator | Tuesday 03 February 2026 06:24:31 +0000 (0:00:01.193) 0:29:44.916 ******
2026-02-03 06:24:46.105406 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:46.105411 | orchestrator |
2026-02-03 06:24:46.105418 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-03 06:24:46.105422 | orchestrator | Tuesday 03 February 2026 06:24:33 +0000 (0:00:01.284) 0:29:46.213 ******
2026-02-03 06:24:46.105426 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:46.105430 | orchestrator |
2026-02-03 06:24:46.105434 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-03 06:24:46.105439 | orchestrator | Tuesday 03 February 2026 06:24:34 +0000 (0:00:01.284) 0:29:47.498 ******
2026-02-03 06:24:46.105445 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:46.105451 | orchestrator |
2026-02-03 06:24:46.105457 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-03 06:24:46.105463 | orchestrator | Tuesday 03 February 2026 06:24:35 +0000 (0:00:01.271) 0:29:48.769 ******
2026-02-03 06:24:46.105469 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:46.105475 | orchestrator |
2026-02-03 06:24:46.105481 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-03 06:24:46.105491 | orchestrator | Tuesday 03 February 2026 06:24:36 +0000 (0:00:01.208) 0:29:49.978 ******
2026-02-03 06:24:46.105504 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:46.105510 | orchestrator |
2026-02-03 06:24:46.105517 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-03 06:24:46.105523 | orchestrator | Tuesday 03 February 2026 06:24:37 +0000 (0:00:00.881) 0:29:50.859 ******
2026-02-03 06:24:46.105539 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:24:46.105545 | orchestrator |
2026-02-03 06:24:46.105552 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-03 06:24:46.105557 | orchestrator | Tuesday 03 February 2026 06:24:39 +0000 (0:00:02.278) 0:29:53.138 ******
2026-02-03 06:24:46.105563 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:24:46.105569 | orchestrator |
2026-02-03 06:24:46.105574 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-03 06:24:46.105580 | orchestrator | Tuesday 03 February 2026 06:24:40 +0000 (0:00:00.825) 0:29:53.963 ******
2026-02-03 06:24:46.105586 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1
2026-02-03 06:24:46.105592 | orchestrator |
2026-02-03 06:24:46.105598 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-03 06:24:46.105604 | orchestrator | Tuesday 03 February 2026 06:24:42 +0000 (0:00:01.303) 0:29:55.267 ******
2026-02-03 06:24:46.105610 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:46.105616 | orchestrator |
2026-02-03 06:24:46.105622 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-03 06:24:46.105628 | orchestrator | Tuesday 03 February 2026 06:24:43 +0000 (0:00:01.226) 0:29:56.493 ******
2026-02-03 06:24:46.105634 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:46.105640 | orchestrator |
2026-02-03 06:24:46.105647 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-03 06:24:46.105653 | orchestrator | Tuesday 03 February 2026 06:24:44 +0000 (0:00:01.396) 0:29:57.889 ******
2026-02-03 06:24:46.105659 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:24:46.105665 | orchestrator |
2026-02-03 06:24:46.105682 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-03 06:25:22.492930 | orchestrator | Tuesday 03 February 2026 06:24:46 +0000 (0:00:01.390) 0:29:59.279 ******
2026-02-03 06:25:22.493134 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:25:22.493159 | orchestrator |
2026-02-03 06:25:22.493178 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-03 06:25:22.493194 | orchestrator | Tuesday 03 February 2026 06:24:47 +0000 (0:00:01.270) 0:30:00.549 ******
2026-02-03 06:25:22.493210 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:25:22.493227 | orchestrator |
2026-02-03 06:25:22.493242 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-03 06:25:22.493258 | orchestrator | Tuesday 03 February 2026 06:24:48 +0000 (0:00:01.187) 0:30:01.736 ******
2026-02-03 06:25:22.493275 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:25:22.493291 | orchestrator |
2026-02-03 06:25:22.493307 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-03 06:25:22.493323 | orchestrator | Tuesday 03 February 2026 06:24:49 +0000 (0:00:01.223) 0:30:02.961 ******
2026-02-03 06:25:22.493339 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:25:22.493356 | orchestrator |
2026-02-03 06:25:22.493373 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-03 06:25:22.493390 | orchestrator | Tuesday 03 February 2026 06:24:51 +0000 (0:00:01.239) 0:30:04.201 ******
2026-02-03 06:25:22.493407 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:25:22.493423 | orchestrator |
2026-02-03 06:25:22.493439 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-03 06:25:22.493455 | orchestrator | Tuesday 03 February 2026 06:24:52 +0000 (0:00:01.169) 0:30:05.370 ******
2026-02-03 06:25:22.493472 | orchestrator | ok: [testbed-node-1]
2026-02-03 06:25:22.493491 | orchestrator |
2026-02-03 06:25:22.493507 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-03 06:25:22.493525 | orchestrator | Tuesday 03 February 2026 06:24:53 +0000 (0:00:00.935) 0:30:06.306 ******
2026-02-03 06:25:22.493542 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1
2026-02-03 06:25:22.493560 | orchestrator |
2026-02-03 06:25:22.493578 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-03 06:25:22.493642 | orchestrator | Tuesday 03 February 2026 06:24:54 +0000 (0:00:01.187) 0:30:07.494 ******
2026-02-03 06:25:22.493662 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph)
2026-02-03 06:25:22.493676 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/)
2026-02-03 06:25:22.493687 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-02-03 06:25:22.493698 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-02-03 06:25:22.493709 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-02-03 06:25:22.493721 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-02-03 06:25:22.493732 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-02-03 06:25:22.493743 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-02-03 06:25:22.493754 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-03 06:25:22.493765 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-03 06:25:22.493777 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-03 06:25:22.493788 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-03 06:25:22.493797 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-03 06:25:22.493807 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-03 06:25:22.493816 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph)
2026-02-03 06:25:22.493826 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph)
2026-02-03 06:25:22.493835 | orchestrator |
2026-02-03 06:25:22.493845 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-03 06:25:22.493856 | orchestrator | Tuesday 03 February 2026 06:25:00 +0000 (0:00:06.685) 0:30:14.179 ******
2026-02-03 06:25:22.493879 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:25:22.493890 | orchestrator |
2026-02-03 06:25:22.493899 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-03 06:25:22.493909 | orchestrator | Tuesday 03 February 2026 06:25:01 +0000 (0:00:00.836) 0:30:15.016
2026-02-03 06:25:22.493918 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:25:22.493928 | orchestrator |
2026-02-03 06:25:22.493937 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-03 06:25:22.493991 | orchestrator | Tuesday 03 February 2026 06:25:02 +0000 (0:00:00.819) 0:30:15.836 ******
2026-02-03 06:25:22.494001 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:25:22.494011 | orchestrator |
2026-02-03 06:25:22.494078 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-03 06:25:22.494088 | orchestrator | Tuesday 03 February 2026 06:25:03 +0000 (0:00:00.808) 0:30:16.645 ******
2026-02-03 06:25:22.494098 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:25:22.494108 | orchestrator |
2026-02-03 06:25:22.494117 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-03 06:25:22.494127 | orchestrator | Tuesday 03 February 2026 06:25:04 +0000 (0:00:00.825) 0:30:17.470 ******
2026-02-03 06:25:22.494137 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:25:22.494146 | orchestrator |
2026-02-03 06:25:22.494156 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-03 06:25:22.494166 | orchestrator | Tuesday 03 February 2026 06:25:05 +0000 (0:00:00.826) 0:30:18.297 ******
2026-02-03 06:25:22.494175 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:25:22.494185 | orchestrator |
2026-02-03 06:25:22.494195 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-03 06:25:22.494205 | orchestrator | Tuesday 03 February 2026 06:25:05 +0000 (0:00:00.822) 0:30:19.120 ******
2026-02-03 06:25:22.494214 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:25:22.494224 | orchestrator |
2026-02-03 06:25:22.494257 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-03 06:25:22.494268 | orchestrator | Tuesday 03 February 2026 06:25:06 +0000 (0:00:00.879) 0:30:20.000 ******
2026-02-03 06:25:22.494288 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:25:22.494298 | orchestrator |
2026-02-03 06:25:22.494308 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-03 06:25:22.494318 | orchestrator | Tuesday 03 February 2026 06:25:07 +0000 (0:00:00.856) 0:30:20.856 ******
2026-02-03 06:25:22.494328 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:25:22.494338 | orchestrator |
2026-02-03 06:25:22.494347 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-03 06:25:22.494357 | orchestrator | Tuesday 03 February 2026 06:25:08 +0000 (0:00:00.820) 0:30:21.721 ******
2026-02-03 06:25:22.494366 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:25:22.494376 | orchestrator |
2026-02-03 06:25:22.494386 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-03 06:25:22.494395 | orchestrator | Tuesday 03 February 2026 06:25:09 +0000 (0:00:00.820) 0:30:22.541 ******
2026-02-03 06:25:22.494405 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:25:22.494415 | orchestrator |
2026-02-03 06:25:22.494424 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-03 06:25:22.494434 | orchestrator | Tuesday 03 February 2026 06:25:10 +0000 (0:00:00.867) 0:30:23.409 ******
2026-02-03 06:25:22.494444 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:25:22.494454 | orchestrator |
2026-02-03 06:25:22.494463 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-03 06:25:22.494473 | orchestrator | Tuesday 03 February 2026 06:25:11 +0000 (0:00:00.800) 0:30:24.210 ******
2026-02-03 06:25:22.494482 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:25:22.494492 | orchestrator |
2026-02-03 06:25:22.494502 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-03 06:25:22.494512 | orchestrator | Tuesday 03 February 2026 06:25:12 +0000 (0:00:00.995) 0:30:25.205 ******
2026-02-03 06:25:22.494521 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:25:22.494531 | orchestrator |
2026-02-03 06:25:22.494541 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-03 06:25:22.494550 | orchestrator | Tuesday 03 February 2026 06:25:12 +0000 (0:00:00.794) 0:30:26.000 ******
2026-02-03 06:25:22.494560 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:25:22.494570 | orchestrator |
2026-02-03 06:25:22.494579 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-03 06:25:22.494589 | orchestrator | Tuesday 03 February 2026 06:25:13 +0000 (0:00:00.934) 0:30:26.935 ******
2026-02-03 06:25:22.494599 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:25:22.494609 | orchestrator |
2026-02-03 06:25:22.494618 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-03 06:25:22.494628 | orchestrator | Tuesday 03 February 2026 06:25:14 +0000 (0:00:00.851) 0:30:27.786 ******
2026-02-03 06:25:22.494637 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:25:22.494647 | orchestrator |
2026-02-03 06:25:22.494657 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-03 06:25:22.494668 | orchestrator | Tuesday 03 February 2026 06:25:15 +0000 (0:00:00.842) 0:30:28.629 ******
2026-02-03 06:25:22.494678 | orchestrator | skipping: [testbed-node-1]
2026-02-03 06:25:22.494687 | orchestrator |
2026-02-03
06:25:22.494697 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-03 06:25:22.494707 | orchestrator | Tuesday 03 February 2026 06:25:16 +0000 (0:00:00.825) 0:30:29.455 ****** 2026-02-03 06:25:22.494716 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:25:22.494726 | orchestrator | 2026-02-03 06:25:22.494735 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-03 06:25:22.494745 | orchestrator | Tuesday 03 February 2026 06:25:17 +0000 (0:00:00.883) 0:30:30.338 ****** 2026-02-03 06:25:22.494755 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:25:22.494764 | orchestrator | 2026-02-03 06:25:22.494774 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-03 06:25:22.494796 | orchestrator | Tuesday 03 February 2026 06:25:18 +0000 (0:00:00.892) 0:30:31.230 ****** 2026-02-03 06:25:22.494806 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:25:22.494815 | orchestrator | 2026-02-03 06:25:22.494825 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-03 06:25:22.494835 | orchestrator | Tuesday 03 February 2026 06:25:18 +0000 (0:00:00.886) 0:30:32.116 ****** 2026-02-03 06:25:22.494844 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-03 06:25:22.494854 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-03 06:25:22.494864 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-03 06:25:22.494874 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:25:22.494884 | orchestrator | 2026-02-03 06:25:22.494893 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-03 06:25:22.494903 | orchestrator | Tuesday 03 February 2026 06:25:20 +0000 (0:00:01.205) 0:30:33.322 ****** 2026-02-03 06:25:22.494912 | orchestrator | 
skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-03 06:25:22.494922 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-03 06:25:22.494932 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-03 06:25:22.494964 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:25:22.494975 | orchestrator | 2026-02-03 06:25:22.494985 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-03 06:25:22.494994 | orchestrator | Tuesday 03 February 2026 06:25:21 +0000 (0:00:01.200) 0:30:34.522 ****** 2026-02-03 06:25:22.495004 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-03 06:25:22.495018 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-03 06:25:22.495034 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-03 06:25:22.495051 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:25:22.495068 | orchestrator | 2026-02-03 06:25:22.495093 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-03 06:26:27.241402 | orchestrator | Tuesday 03 February 2026 06:25:22 +0000 (0:00:01.138) 0:30:35.661 ****** 2026-02-03 06:26:27.241501 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:26:27.241514 | orchestrator | 2026-02-03 06:26:27.241524 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-03 06:26:27.241532 | orchestrator | Tuesday 03 February 2026 06:25:23 +0000 (0:00:00.873) 0:30:36.535 ****** 2026-02-03 06:26:27.241542 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-02-03 06:26:27.241550 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:26:27.241558 | orchestrator | 2026-02-03 06:26:27.241566 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-03 06:26:27.241574 | orchestrator | Tuesday 03 February 
2026 06:25:24 +0000 (0:00:00.954) 0:30:37.490 ****** 2026-02-03 06:26:27.241582 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:26:27.241590 | orchestrator | 2026-02-03 06:26:27.241598 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-03 06:26:27.241606 | orchestrator | Tuesday 03 February 2026 06:25:25 +0000 (0:00:01.551) 0:30:39.041 ****** 2026-02-03 06:26:27.241614 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:26:27.241623 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-03 06:26:27.241632 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:26:27.241639 | orchestrator | 2026-02-03 06:26:27.241647 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-03 06:26:27.241655 | orchestrator | Tuesday 03 February 2026 06:25:27 +0000 (0:00:01.801) 0:30:40.843 ****** 2026-02-03 06:26:27.241663 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-1 2026-02-03 06:26:27.241671 | orchestrator | 2026-02-03 06:26:27.241679 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-02-03 06:26:27.241705 | orchestrator | Tuesday 03 February 2026 06:25:28 +0000 (0:00:01.174) 0:30:42.018 ****** 2026-02-03 06:26:27.241714 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:26:27.241722 | orchestrator | 2026-02-03 06:26:27.241733 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-02-03 06:26:27.241746 | orchestrator | Tuesday 03 February 2026 06:25:30 +0000 (0:00:01.621) 0:30:43.640 ****** 2026-02-03 06:26:27.241760 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:26:27.241772 | orchestrator | 2026-02-03 06:26:27.241785 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] 
********************* 2026-02-03 06:26:27.241800 | orchestrator | Tuesday 03 February 2026 06:25:31 +0000 (0:00:01.185) 0:30:44.825 ****** 2026-02-03 06:26:27.241814 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 06:26:27.241825 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 06:26:27.241832 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 06:26:27.241840 | orchestrator | ok: [testbed-node-1 -> {{ groups[mon_group_name][0] }}] 2026-02-03 06:26:27.241848 | orchestrator | 2026-02-03 06:26:27.241856 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-02-03 06:26:27.241864 | orchestrator | Tuesday 03 February 2026 06:25:39 +0000 (0:00:07.469) 0:30:52.295 ****** 2026-02-03 06:26:27.241871 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:26:27.241879 | orchestrator | 2026-02-03 06:26:27.241887 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-02-03 06:26:27.241895 | orchestrator | Tuesday 03 February 2026 06:25:40 +0000 (0:00:01.244) 0:30:53.540 ****** 2026-02-03 06:26:27.241903 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-03 06:26:27.241911 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-03 06:26:27.241942 | orchestrator | 2026-02-03 06:26:27.241951 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-02-03 06:26:27.241961 | orchestrator | Tuesday 03 February 2026 06:25:43 +0000 (0:00:03.598) 0:30:57.139 ****** 2026-02-03 06:26:27.241984 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-03 06:26:27.241994 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-02-03 06:26:27.242003 | orchestrator | 2026-02-03 06:26:27.242012 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] 
************************************** 2026-02-03 06:26:27.242065 | orchestrator | Tuesday 03 February 2026 06:25:46 +0000 (0:00:02.319) 0:30:59.459 ****** 2026-02-03 06:26:27.242075 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:26:27.242084 | orchestrator | 2026-02-03 06:26:27.242094 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-02-03 06:26:27.242103 | orchestrator | Tuesday 03 February 2026 06:25:47 +0000 (0:00:01.569) 0:31:01.032 ****** 2026-02-03 06:26:27.242112 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:26:27.242122 | orchestrator | 2026-02-03 06:26:27.242131 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-03 06:26:27.242140 | orchestrator | Tuesday 03 February 2026 06:25:48 +0000 (0:00:00.829) 0:31:01.862 ****** 2026-02-03 06:26:27.242149 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:26:27.242166 | orchestrator | 2026-02-03 06:26:27.242176 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-03 06:26:27.242185 | orchestrator | Tuesday 03 February 2026 06:25:49 +0000 (0:00:00.804) 0:31:02.666 ****** 2026-02-03 06:26:27.242194 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-1 2026-02-03 06:26:27.242204 | orchestrator | 2026-02-03 06:26:27.242212 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-02-03 06:26:27.242223 | orchestrator | Tuesday 03 February 2026 06:25:50 +0000 (0:00:01.212) 0:31:03.878 ****** 2026-02-03 06:26:27.242232 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:26:27.242242 | orchestrator | 2026-02-03 06:26:27.242251 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-02-03 06:26:27.242269 | orchestrator | Tuesday 03 February 2026 06:25:51 +0000 (0:00:01.236) 0:31:05.115 ****** 2026-02-03 
06:26:27.242279 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:26:27.242288 | orchestrator | 2026-02-03 06:26:27.242312 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-02-03 06:26:27.242320 | orchestrator | Tuesday 03 February 2026 06:25:53 +0000 (0:00:01.231) 0:31:06.346 ****** 2026-02-03 06:26:27.242328 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1 2026-02-03 06:26:27.242336 | orchestrator | 2026-02-03 06:26:27.242344 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-02-03 06:26:27.242352 | orchestrator | Tuesday 03 February 2026 06:25:54 +0000 (0:00:01.390) 0:31:07.737 ****** 2026-02-03 06:26:27.242359 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:26:27.242367 | orchestrator | 2026-02-03 06:26:27.242375 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-02-03 06:26:27.242383 | orchestrator | Tuesday 03 February 2026 06:25:56 +0000 (0:00:02.168) 0:31:09.905 ****** 2026-02-03 06:26:27.242391 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:26:27.242399 | orchestrator | 2026-02-03 06:26:27.242407 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-02-03 06:26:27.242415 | orchestrator | Tuesday 03 February 2026 06:25:58 +0000 (0:00:01.988) 0:31:11.893 ****** 2026-02-03 06:26:27.242422 | orchestrator | ok: [testbed-node-1] 2026-02-03 06:26:27.242430 | orchestrator | 2026-02-03 06:26:27.242438 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-02-03 06:26:27.242446 | orchestrator | Tuesday 03 February 2026 06:26:01 +0000 (0:00:02.597) 0:31:14.491 ****** 2026-02-03 06:26:27.242454 | orchestrator | changed: [testbed-node-1] 2026-02-03 06:26:27.242462 | orchestrator | 2026-02-03 06:26:27.242470 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] 
************************************** 2026-02-03 06:26:27.242478 | orchestrator | Tuesday 03 February 2026 06:26:05 +0000 (0:00:03.914) 0:31:18.406 ****** 2026-02-03 06:26:27.242486 | orchestrator | skipping: [testbed-node-1] 2026-02-03 06:26:27.242494 | orchestrator | 2026-02-03 06:26:27.242502 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-02-03 06:26:27.242509 | orchestrator | 2026-02-03 06:26:27.242517 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-03 06:26:27.242525 | orchestrator | Tuesday 03 February 2026 06:26:06 +0000 (0:00:01.388) 0:31:19.795 ****** 2026-02-03 06:26:27.242533 | orchestrator | changed: [testbed-node-2] 2026-02-03 06:26:27.242541 | orchestrator | 2026-02-03 06:26:27.242549 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-02-03 06:26:27.242557 | orchestrator | Tuesday 03 February 2026 06:26:09 +0000 (0:00:02.667) 0:31:22.463 ****** 2026-02-03 06:26:27.242565 | orchestrator | changed: [testbed-node-2] 2026-02-03 06:26:27.242572 | orchestrator | 2026-02-03 06:26:27.242580 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-03 06:26:27.242588 | orchestrator | Tuesday 03 February 2026 06:26:11 +0000 (0:00:02.279) 0:31:24.743 ****** 2026-02-03 06:26:27.242596 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2 2026-02-03 06:26:27.242604 | orchestrator | 2026-02-03 06:26:27.242612 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-03 06:26:27.242620 | orchestrator | Tuesday 03 February 2026 06:26:12 +0000 (0:00:01.221) 0:31:25.965 ****** 2026-02-03 06:26:27.242628 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:26:27.242635 | orchestrator | 2026-02-03 06:26:27.242644 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] 
***************************************** 2026-02-03 06:26:27.242652 | orchestrator | Tuesday 03 February 2026 06:26:14 +0000 (0:00:01.589) 0:31:27.554 ****** 2026-02-03 06:26:27.242659 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:26:27.242667 | orchestrator | 2026-02-03 06:26:27.242675 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-03 06:26:27.242683 | orchestrator | Tuesday 03 February 2026 06:26:15 +0000 (0:00:01.224) 0:31:28.779 ****** 2026-02-03 06:26:27.242696 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:26:27.242704 | orchestrator | 2026-02-03 06:26:27.242712 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-03 06:26:27.242720 | orchestrator | Tuesday 03 February 2026 06:26:17 +0000 (0:00:02.028) 0:31:30.808 ****** 2026-02-03 06:26:27.242728 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:26:27.242735 | orchestrator | 2026-02-03 06:26:27.242748 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-03 06:26:27.242757 | orchestrator | Tuesday 03 February 2026 06:26:18 +0000 (0:00:01.185) 0:31:31.993 ****** 2026-02-03 06:26:27.242765 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:26:27.242772 | orchestrator | 2026-02-03 06:26:27.242783 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-03 06:26:27.242797 | orchestrator | Tuesday 03 February 2026 06:26:20 +0000 (0:00:01.218) 0:31:33.212 ****** 2026-02-03 06:26:27.242811 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:26:27.242825 | orchestrator | 2026-02-03 06:26:27.242838 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-03 06:26:27.242852 | orchestrator | Tuesday 03 February 2026 06:26:21 +0000 (0:00:01.281) 0:31:34.494 ****** 2026-02-03 06:26:27.242865 | orchestrator | skipping: [testbed-node-2] 2026-02-03 
06:26:27.242877 | orchestrator | 2026-02-03 06:26:27.242885 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-03 06:26:27.242893 | orchestrator | Tuesday 03 February 2026 06:26:22 +0000 (0:00:01.214) 0:31:35.709 ****** 2026-02-03 06:26:27.242900 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:26:27.242908 | orchestrator | 2026-02-03 06:26:27.242935 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-03 06:26:27.242945 | orchestrator | Tuesday 03 February 2026 06:26:23 +0000 (0:00:01.155) 0:31:36.864 ****** 2026-02-03 06:26:27.242953 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:26:27.242961 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:26:27.242969 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-03 06:26:27.242976 | orchestrator | 2026-02-03 06:26:27.242984 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-03 06:26:27.242992 | orchestrator | Tuesday 03 February 2026 06:26:25 +0000 (0:00:02.241) 0:31:39.106 ****** 2026-02-03 06:26:27.243006 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:26:53.180364 | orchestrator | 2026-02-03 06:26:53.180513 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-03 06:26:53.180542 | orchestrator | Tuesday 03 February 2026 06:26:27 +0000 (0:00:01.311) 0:31:40.417 ****** 2026-02-03 06:26:53.180561 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:26:53.180580 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:26:53.180598 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-03 06:26:53.180618 | orchestrator | 2026-02-03 
06:26:53.180636 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-03 06:26:53.180656 | orchestrator | Tuesday 03 February 2026 06:26:30 +0000 (0:00:03.503) 0:31:43.921 ****** 2026-02-03 06:26:53.180675 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-03 06:26:53.180691 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-03 06:26:53.180702 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-03 06:26:53.180713 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:26:53.180725 | orchestrator | 2026-02-03 06:26:53.180736 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-03 06:26:53.180747 | orchestrator | Tuesday 03 February 2026 06:26:32 +0000 (0:00:01.937) 0:31:45.859 ****** 2026-02-03 06:26:53.180760 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-03 06:26:53.180801 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-03 06:26:53.180813 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-03 06:26:53.180824 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:26:53.180835 | orchestrator | 2026-02-03 06:26:53.180846 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-03 06:26:53.180857 | orchestrator | 
Tuesday 03 February 2026 06:26:34 +0000 (0:00:02.073) 0:31:47.932 ****** 2026-02-03 06:26:53.180871 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 06:26:53.180886 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 06:26:53.180953 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 06:26:53.180969 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:26:53.180980 | orchestrator | 2026-02-03 06:26:53.180992 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-03 06:26:53.181002 | orchestrator | Tuesday 03 February 2026 06:26:36 +0000 (0:00:01.446) 0:31:49.379 ****** 2026-02-03 06:26:53.181038 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'fc9af7e241e8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 
'name=ceph-mon-testbed-node-0'], 'start': '2026-02-03 06:26:27.794911', 'end': '2026-02-03 06:26:27.865154', 'delta': '0:00:00.070243', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fc9af7e241e8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-03 06:26:53.181054 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'a8f198eef309', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-03 06:26:28.853427', 'end': '2026-02-03 06:26:28.904794', 'delta': '0:00:00.051367', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a8f198eef309'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-03 06:26:53.181079 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '79d18794d8bb', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-03 06:26:29.509283', 'end': '2026-02-03 06:26:29.549500', 'delta': '0:00:00.040217', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 
'removes': None, 'stdin': None}}, 'stdout_lines': ['79d18794d8bb'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-03 06:26:53.181092 | orchestrator | 2026-02-03 06:26:53.181105 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-03 06:26:53.181118 | orchestrator | Tuesday 03 February 2026 06:26:37 +0000 (0:00:01.281) 0:31:50.661 ****** 2026-02-03 06:26:53.181131 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:26:53.181144 | orchestrator | 2026-02-03 06:26:53.181156 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-03 06:26:53.181169 | orchestrator | Tuesday 03 February 2026 06:26:38 +0000 (0:00:01.353) 0:31:52.015 ****** 2026-02-03 06:26:53.181182 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:26:53.181195 | orchestrator | 2026-02-03 06:26:53.181208 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-03 06:26:53.181219 | orchestrator | Tuesday 03 February 2026 06:26:40 +0000 (0:00:01.318) 0:31:53.333 ****** 2026-02-03 06:26:53.181230 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:26:53.181241 | orchestrator | 2026-02-03 06:26:53.181253 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-03 06:26:53.181264 | orchestrator | Tuesday 03 February 2026 06:26:41 +0000 (0:00:01.184) 0:31:54.518 ****** 2026-02-03 06:26:53.181275 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-03 06:26:53.181286 | orchestrator | 2026-02-03 06:26:53.181297 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-03 06:26:53.181308 | orchestrator | Tuesday 03 February 2026 06:26:43 +0000 (0:00:02.067) 0:31:56.586 ****** 2026-02-03 06:26:53.181319 | orchestrator | ok: [testbed-node-2] 2026-02-03 
06:26:53.181330 | orchestrator | 2026-02-03 06:26:53.181341 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-03 06:26:53.181351 | orchestrator | Tuesday 03 February 2026 06:26:44 +0000 (0:00:01.227) 0:31:57.814 ****** 2026-02-03 06:26:53.181368 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:26:53.181379 | orchestrator | 2026-02-03 06:26:53.181390 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-03 06:26:53.181401 | orchestrator | Tuesday 03 February 2026 06:26:45 +0000 (0:00:01.162) 0:31:58.976 ****** 2026-02-03 06:26:53.181411 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:26:53.181422 | orchestrator | 2026-02-03 06:26:53.181433 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-03 06:26:53.181444 | orchestrator | Tuesday 03 February 2026 06:26:47 +0000 (0:00:01.278) 0:32:00.255 ****** 2026-02-03 06:26:53.181455 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:26:53.181466 | orchestrator | 2026-02-03 06:26:53.181477 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-03 06:26:53.181487 | orchestrator | Tuesday 03 February 2026 06:26:48 +0000 (0:00:01.156) 0:32:01.411 ****** 2026-02-03 06:26:53.181504 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:26:53.181522 | orchestrator | 2026-02-03 06:26:53.181539 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-03 06:26:53.181556 | orchestrator | Tuesday 03 February 2026 06:26:49 +0000 (0:00:01.207) 0:32:02.619 ****** 2026-02-03 06:26:53.181573 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:26:53.181602 | orchestrator | 2026-02-03 06:26:53.181623 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-03 06:26:53.181641 | orchestrator | Tuesday 03 
February 2026 06:26:50 +0000 (0:00:01.232) 0:32:03.851 ****** 2026-02-03 06:26:53.181660 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:26:53.181672 | orchestrator | 2026-02-03 06:26:53.181683 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-03 06:26:53.181694 | orchestrator | Tuesday 03 February 2026 06:26:51 +0000 (0:00:01.216) 0:32:05.068 ****** 2026-02-03 06:26:53.181704 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:26:53.181715 | orchestrator | 2026-02-03 06:26:53.181726 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-03 06:26:53.181746 | orchestrator | Tuesday 03 February 2026 06:26:53 +0000 (0:00:01.283) 0:32:06.351 ****** 2026-02-03 06:26:58.248045 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:26:58.248154 | orchestrator | 2026-02-03 06:26:58.248169 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-03 06:26:58.248179 | orchestrator | Tuesday 03 February 2026 06:26:54 +0000 (0:00:01.160) 0:32:07.512 ****** 2026-02-03 06:26:58.248200 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:26:58.248244 | orchestrator | 2026-02-03 06:26:58.248255 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-03 06:26:58.248265 | orchestrator | Tuesday 03 February 2026 06:26:55 +0000 (0:00:01.225) 0:32:08.738 ****** 2026-02-03 06:26:58.248277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:26:58.248290 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:26:58.248299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:26:58.248311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-40-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-03 06:26:58.248323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:26:58.248347 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:26:58.248378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:26:58.248410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5699a710', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part16', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part14', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': 
'8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part15', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part1', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-03 06:26:58.248422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:26:58.248431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:26:58.248440 | orchestrator | 
skipping: [testbed-node-2] 2026-02-03 06:26:58.248449 | orchestrator | 2026-02-03 06:26:58.248458 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-03 06:26:58.248467 | orchestrator | Tuesday 03 February 2026 06:26:56 +0000 (0:00:01.292) 0:32:10.030 ****** 2026-02-03 06:26:58.248489 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:26:58.248500 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:26:58.248516 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': 
None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:27:09.940612 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-40-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:27:09.940733 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:27:09.940751 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 
'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:27:09.940807 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:27:09.940845 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5699a710', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part16', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 
'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part14', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part15', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part1', 'scsi-SQEMU_QEMU_HARDDISK_5699a710-abd3-43e6-8d32-dcb1e0ac0cbe-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:27:09.940860 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:27:09.940872 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:27:09.940892 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:27:09.941098 | orchestrator | 2026-02-03 06:27:09.941128 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-03 06:27:09.941149 | orchestrator | Tuesday 03 February 2026 06:26:58 +0000 (0:00:01.398) 0:32:11.429 ****** 2026-02-03 06:27:09.941186 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:27:09.941211 | orchestrator | 2026-02-03 06:27:09.941230 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-03 06:27:09.941247 | orchestrator 
| Tuesday 03 February 2026 06:26:59 +0000 (0:00:01.528) 0:32:12.957 ****** 2026-02-03 06:27:09.941265 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:27:09.941284 | orchestrator | 2026-02-03 06:27:09.941302 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-03 06:27:09.941323 | orchestrator | Tuesday 03 February 2026 06:27:01 +0000 (0:00:01.282) 0:32:14.240 ****** 2026-02-03 06:27:09.941342 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:27:09.941363 | orchestrator | 2026-02-03 06:27:09.941378 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-03 06:27:09.941392 | orchestrator | Tuesday 03 February 2026 06:27:02 +0000 (0:00:01.593) 0:32:15.833 ****** 2026-02-03 06:27:09.941404 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:27:09.941416 | orchestrator | 2026-02-03 06:27:09.941429 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-03 06:27:09.941442 | orchestrator | Tuesday 03 February 2026 06:27:03 +0000 (0:00:01.225) 0:32:17.059 ****** 2026-02-03 06:27:09.941454 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:27:09.941468 | orchestrator | 2026-02-03 06:27:09.941480 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-03 06:27:09.941492 | orchestrator | Tuesday 03 February 2026 06:27:05 +0000 (0:00:01.327) 0:32:18.386 ****** 2026-02-03 06:27:09.941502 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:27:09.941513 | orchestrator | 2026-02-03 06:27:09.941524 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-03 06:27:09.941534 | orchestrator | Tuesday 03 February 2026 06:27:06 +0000 (0:00:01.307) 0:32:19.694 ****** 2026-02-03 06:27:09.941545 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-03 06:27:09.941557 | orchestrator | ok: 
[testbed-node-2] => (item=testbed-node-1) 2026-02-03 06:27:09.941567 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-03 06:27:09.941578 | orchestrator | 2026-02-03 06:27:09.941592 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-03 06:27:09.941611 | orchestrator | Tuesday 03 February 2026 06:27:08 +0000 (0:00:02.199) 0:32:21.893 ****** 2026-02-03 06:27:09.941629 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-03 06:27:09.941648 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-03 06:27:09.941668 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-03 06:27:09.941687 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:27:09.941705 | orchestrator | 2026-02-03 06:27:09.941743 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-03 06:27:48.910639 | orchestrator | Tuesday 03 February 2026 06:27:09 +0000 (0:00:01.218) 0:32:23.111 ****** 2026-02-03 06:27:48.910759 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:27:48.910777 | orchestrator | 2026-02-03 06:27:48.910789 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-03 06:27:48.910801 | orchestrator | Tuesday 03 February 2026 06:27:11 +0000 (0:00:01.191) 0:32:24.303 ****** 2026-02-03 06:27:48.910814 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:27:48.910826 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:27:48.910869 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-03 06:27:48.910881 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-03 06:27:48.910954 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-02-03 06:27:48.910967 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-03 06:27:48.910978 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-03 06:27:48.910989 | orchestrator | 2026-02-03 06:27:48.911000 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-03 06:27:48.911011 | orchestrator | Tuesday 03 February 2026 06:27:13 +0000 (0:00:02.433) 0:32:26.736 ****** 2026-02-03 06:27:48.911022 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:27:48.911032 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:27:48.911043 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-03 06:27:48.911054 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-03 06:27:48.911065 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-03 06:27:48.911076 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-03 06:27:48.911086 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-03 06:27:48.911097 | orchestrator | 2026-02-03 06:27:48.911108 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-03 06:27:48.911120 | orchestrator | Tuesday 03 February 2026 06:27:16 +0000 (0:00:02.512) 0:32:29.249 ****** 2026-02-03 06:27:48.911139 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2 2026-02-03 06:27:48.911158 | orchestrator | 2026-02-03 06:27:48.911175 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-03 06:27:48.911194 
| orchestrator | Tuesday 03 February 2026 06:27:17 +0000 (0:00:01.159) 0:32:30.408 ****** 2026-02-03 06:27:48.911214 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2 2026-02-03 06:27:48.911234 | orchestrator | 2026-02-03 06:27:48.911253 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-03 06:27:48.911288 | orchestrator | Tuesday 03 February 2026 06:27:18 +0000 (0:00:01.195) 0:32:31.603 ****** 2026-02-03 06:27:48.911302 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:27:48.911316 | orchestrator | 2026-02-03 06:27:48.911328 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-03 06:27:48.911341 | orchestrator | Tuesday 03 February 2026 06:27:20 +0000 (0:00:01.621) 0:32:33.225 ****** 2026-02-03 06:27:48.911353 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:27:48.911365 | orchestrator | 2026-02-03 06:27:48.911377 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-03 06:27:48.911390 | orchestrator | Tuesday 03 February 2026 06:27:21 +0000 (0:00:01.236) 0:32:34.462 ****** 2026-02-03 06:27:48.911402 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:27:48.911415 | orchestrator | 2026-02-03 06:27:48.911428 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-03 06:27:48.911440 | orchestrator | Tuesday 03 February 2026 06:27:22 +0000 (0:00:01.200) 0:32:35.662 ****** 2026-02-03 06:27:48.911453 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:27:48.911466 | orchestrator | 2026-02-03 06:27:48.911478 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-03 06:27:48.911491 | orchestrator | Tuesday 03 February 2026 06:27:23 +0000 (0:00:01.193) 0:32:36.856 ****** 2026-02-03 06:27:48.911503 | orchestrator | ok: [testbed-node-2] 
2026-02-03 06:27:48.911516 | orchestrator | 2026-02-03 06:27:48.911528 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-03 06:27:48.911549 | orchestrator | Tuesday 03 February 2026 06:27:25 +0000 (0:00:01.650) 0:32:38.506 ****** 2026-02-03 06:27:48.911560 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:27:48.911571 | orchestrator | 2026-02-03 06:27:48.911582 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-03 06:27:48.911593 | orchestrator | Tuesday 03 February 2026 06:27:26 +0000 (0:00:01.203) 0:32:39.710 ****** 2026-02-03 06:27:48.911604 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:27:48.911615 | orchestrator | 2026-02-03 06:27:48.911625 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-03 06:27:48.911636 | orchestrator | Tuesday 03 February 2026 06:27:27 +0000 (0:00:01.253) 0:32:40.964 ****** 2026-02-03 06:27:48.911646 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:27:48.911657 | orchestrator | 2026-02-03 06:27:48.911668 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-03 06:27:48.911679 | orchestrator | Tuesday 03 February 2026 06:27:29 +0000 (0:00:01.726) 0:32:42.691 ****** 2026-02-03 06:27:48.911689 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:27:48.911700 | orchestrator | 2026-02-03 06:27:48.911711 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-03 06:27:48.911741 | orchestrator | Tuesday 03 February 2026 06:27:31 +0000 (0:00:01.722) 0:32:44.414 ****** 2026-02-03 06:27:48.911753 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:27:48.911764 | orchestrator | 2026-02-03 06:27:48.911774 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-03 06:27:48.911785 | orchestrator | Tuesday 03 
February 2026 06:27:32 +0000 (0:00:00.849) 0:32:45.264 ****** 2026-02-03 06:27:48.911796 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:27:48.911807 | orchestrator | 2026-02-03 06:27:48.911818 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-03 06:27:48.911828 | orchestrator | Tuesday 03 February 2026 06:27:32 +0000 (0:00:00.863) 0:32:46.128 ****** 2026-02-03 06:27:48.911839 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:27:48.911850 | orchestrator | 2026-02-03 06:27:48.911861 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-03 06:27:48.911871 | orchestrator | Tuesday 03 February 2026 06:27:33 +0000 (0:00:00.812) 0:32:46.940 ****** 2026-02-03 06:27:48.911882 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:27:48.911932 | orchestrator | 2026-02-03 06:27:48.911943 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-03 06:27:48.911954 | orchestrator | Tuesday 03 February 2026 06:27:34 +0000 (0:00:00.816) 0:32:47.757 ****** 2026-02-03 06:27:48.911964 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:27:48.911975 | orchestrator | 2026-02-03 06:27:48.911986 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-03 06:27:48.911997 | orchestrator | Tuesday 03 February 2026 06:27:35 +0000 (0:00:00.848) 0:32:48.606 ****** 2026-02-03 06:27:48.912008 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:27:48.912019 | orchestrator | 2026-02-03 06:27:48.912029 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-03 06:27:48.912040 | orchestrator | Tuesday 03 February 2026 06:27:36 +0000 (0:00:00.841) 0:32:49.447 ****** 2026-02-03 06:27:48.912051 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:27:48.912062 | orchestrator | 2026-02-03 06:27:48.912072 | orchestrator | 
TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-03 06:27:48.912083 | orchestrator | Tuesday 03 February 2026 06:27:37 +0000 (0:00:00.833) 0:32:50.280 ****** 2026-02-03 06:27:48.912094 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:27:48.912105 | orchestrator | 2026-02-03 06:27:48.912115 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-03 06:27:48.912126 | orchestrator | Tuesday 03 February 2026 06:27:38 +0000 (0:00:00.948) 0:32:51.229 ****** 2026-02-03 06:27:48.912137 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:27:48.912148 | orchestrator | 2026-02-03 06:27:48.912158 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-03 06:27:48.912181 | orchestrator | Tuesday 03 February 2026 06:27:38 +0000 (0:00:00.794) 0:32:52.024 ****** 2026-02-03 06:27:48.912200 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:27:48.912220 | orchestrator | 2026-02-03 06:27:48.912239 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-03 06:27:48.912260 | orchestrator | Tuesday 03 February 2026 06:27:39 +0000 (0:00:01.001) 0:32:53.026 ****** 2026-02-03 06:27:48.912280 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:27:48.912299 | orchestrator | 2026-02-03 06:27:48.912315 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-03 06:27:48.912326 | orchestrator | Tuesday 03 February 2026 06:27:40 +0000 (0:00:00.880) 0:32:53.906 ****** 2026-02-03 06:27:48.912336 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:27:48.912347 | orchestrator | 2026-02-03 06:27:48.912364 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-03 06:27:48.912376 | orchestrator | Tuesday 03 February 2026 06:27:41 +0000 (0:00:00.816) 0:32:54.723 ****** 2026-02-03 06:27:48.912386 | 
orchestrator | skipping: [testbed-node-2] 2026-02-03 06:27:48.912397 | orchestrator | 2026-02-03 06:27:48.912408 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-03 06:27:48.912419 | orchestrator | Tuesday 03 February 2026 06:27:42 +0000 (0:00:00.862) 0:32:55.586 ****** 2026-02-03 06:27:48.912430 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:27:48.912440 | orchestrator | 2026-02-03 06:27:48.912451 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-03 06:27:48.912462 | orchestrator | Tuesday 03 February 2026 06:27:43 +0000 (0:00:00.845) 0:32:56.431 ****** 2026-02-03 06:27:48.912473 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:27:48.912483 | orchestrator | 2026-02-03 06:27:48.912494 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-03 06:27:48.912505 | orchestrator | Tuesday 03 February 2026 06:27:44 +0000 (0:00:00.780) 0:32:57.212 ****** 2026-02-03 06:27:48.912516 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:27:48.912527 | orchestrator | 2026-02-03 06:27:48.912537 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-03 06:27:48.912548 | orchestrator | Tuesday 03 February 2026 06:27:44 +0000 (0:00:00.781) 0:32:57.993 ****** 2026-02-03 06:27:48.912559 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:27:48.912570 | orchestrator | 2026-02-03 06:27:48.912580 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-03 06:27:48.912591 | orchestrator | Tuesday 03 February 2026 06:27:45 +0000 (0:00:00.836) 0:32:58.830 ****** 2026-02-03 06:27:48.912602 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:27:48.912613 | orchestrator | 2026-02-03 06:27:48.912623 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] 
************************* 2026-02-03 06:27:48.912634 | orchestrator | Tuesday 03 February 2026 06:27:46 +0000 (0:00:00.819) 0:32:59.649 ****** 2026-02-03 06:27:48.912645 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:27:48.912656 | orchestrator | 2026-02-03 06:27:48.912667 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-03 06:27:48.912677 | orchestrator | Tuesday 03 February 2026 06:27:47 +0000 (0:00:00.787) 0:33:00.437 ****** 2026-02-03 06:27:48.912688 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:27:48.912699 | orchestrator | 2026-02-03 06:27:48.912710 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-03 06:27:48.912721 | orchestrator | Tuesday 03 February 2026 06:27:48 +0000 (0:00:00.822) 0:33:01.260 ****** 2026-02-03 06:27:48.912732 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:27:48.912743 | orchestrator | 2026-02-03 06:27:48.912762 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-03 06:28:36.925710 | orchestrator | Tuesday 03 February 2026 06:27:48 +0000 (0:00:00.825) 0:33:02.085 ****** 2026-02-03 06:28:36.925827 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:28:36.925844 | orchestrator | 2026-02-03 06:28:36.925858 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-03 06:28:36.925966 | orchestrator | Tuesday 03 February 2026 06:27:49 +0000 (0:00:00.955) 0:33:03.041 ****** 2026-02-03 06:28:36.925986 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:28:36.926007 | orchestrator | 2026-02-03 06:28:36.926080 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-03 06:28:36.926093 | orchestrator | Tuesday 03 February 2026 06:27:51 +0000 (0:00:01.653) 0:33:04.695 ****** 2026-02-03 06:28:36.926104 | orchestrator | ok: [testbed-node-2] 2026-02-03 
06:28:36.926115 | orchestrator | 2026-02-03 06:28:36.926126 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-03 06:28:36.926137 | orchestrator | Tuesday 03 February 2026 06:27:53 +0000 (0:00:02.188) 0:33:06.883 ****** 2026-02-03 06:28:36.926148 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2 2026-02-03 06:28:36.926160 | orchestrator | 2026-02-03 06:28:36.926171 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-03 06:28:36.926182 | orchestrator | Tuesday 03 February 2026 06:27:54 +0000 (0:00:01.190) 0:33:08.074 ****** 2026-02-03 06:28:36.926193 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:28:36.926204 | orchestrator | 2026-02-03 06:28:36.926215 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-03 06:28:36.926225 | orchestrator | Tuesday 03 February 2026 06:27:56 +0000 (0:00:01.252) 0:33:09.327 ****** 2026-02-03 06:28:36.926237 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:28:36.926250 | orchestrator | 2026-02-03 06:28:36.926263 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-03 06:28:36.926276 | orchestrator | Tuesday 03 February 2026 06:27:57 +0000 (0:00:01.270) 0:33:10.598 ****** 2026-02-03 06:28:36.926288 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-03 06:28:36.926301 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-03 06:28:36.926314 | orchestrator | 2026-02-03 06:28:36.926326 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-03 06:28:36.926338 | orchestrator | Tuesday 03 February 2026 06:27:59 +0000 (0:00:01.870) 0:33:12.468 ****** 2026-02-03 06:28:36.926350 | orchestrator | ok: 
[testbed-node-2] 2026-02-03 06:28:36.926363 | orchestrator | 2026-02-03 06:28:36.926376 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-03 06:28:36.926388 | orchestrator | Tuesday 03 February 2026 06:28:00 +0000 (0:00:01.681) 0:33:14.150 ****** 2026-02-03 06:28:36.926400 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:28:36.926413 | orchestrator | 2026-02-03 06:28:36.926426 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-03 06:28:36.926439 | orchestrator | Tuesday 03 February 2026 06:28:02 +0000 (0:00:01.290) 0:33:15.441 ****** 2026-02-03 06:28:36.926451 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:28:36.926464 | orchestrator | 2026-02-03 06:28:36.926475 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-03 06:28:36.926501 | orchestrator | Tuesday 03 February 2026 06:28:03 +0000 (0:00:00.838) 0:33:16.279 ****** 2026-02-03 06:28:36.926513 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:28:36.926524 | orchestrator | 2026-02-03 06:28:36.926535 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-03 06:28:36.926546 | orchestrator | Tuesday 03 February 2026 06:28:03 +0000 (0:00:00.808) 0:33:17.088 ****** 2026-02-03 06:28:36.926556 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2 2026-02-03 06:28:36.926567 | orchestrator | 2026-02-03 06:28:36.926578 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-03 06:28:36.926589 | orchestrator | Tuesday 03 February 2026 06:28:05 +0000 (0:00:01.181) 0:33:18.269 ****** 2026-02-03 06:28:36.926600 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:28:36.926611 | orchestrator | 2026-02-03 06:28:36.926622 | orchestrator | TASK [ceph-container-common : Pulling 
alertmanager/prometheus/grafana container images] *** 2026-02-03 06:28:36.926644 | orchestrator | Tuesday 03 February 2026 06:28:06 +0000 (0:00:01.865) 0:33:20.135 ****** 2026-02-03 06:28:36.926655 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-03 06:28:36.926666 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-03 06:28:36.926677 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-03 06:28:36.926688 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:28:36.926699 | orchestrator | 2026-02-03 06:28:36.926709 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-03 06:28:36.926720 | orchestrator | Tuesday 03 February 2026 06:28:08 +0000 (0:00:01.199) 0:33:21.335 ****** 2026-02-03 06:28:36.926731 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:28:36.926742 | orchestrator | 2026-02-03 06:28:36.926753 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-03 06:28:36.926763 | orchestrator | Tuesday 03 February 2026 06:28:09 +0000 (0:00:01.214) 0:33:22.549 ****** 2026-02-03 06:28:36.926774 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:28:36.926785 | orchestrator | 2026-02-03 06:28:36.926796 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-03 06:28:36.926806 | orchestrator | Tuesday 03 February 2026 06:28:10 +0000 (0:00:01.300) 0:33:23.849 ****** 2026-02-03 06:28:36.926817 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:28:36.926828 | orchestrator | 2026-02-03 06:28:36.926839 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-03 06:28:36.926849 | orchestrator | Tuesday 03 February 2026 06:28:11 +0000 (0:00:01.201) 0:33:25.051 ****** 2026-02-03 06:28:36.926861 | orchestrator | skipping: 
[testbed-node-2] 2026-02-03 06:28:36.926891 | orchestrator | 2026-02-03 06:28:36.926924 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-03 06:28:36.926936 | orchestrator | Tuesday 03 February 2026 06:28:13 +0000 (0:00:01.223) 0:33:26.274 ****** 2026-02-03 06:28:36.926947 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:28:36.926957 | orchestrator | 2026-02-03 06:28:36.926968 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-03 06:28:36.926979 | orchestrator | Tuesday 03 February 2026 06:28:13 +0000 (0:00:00.816) 0:33:27.091 ****** 2026-02-03 06:28:36.926990 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:28:36.927001 | orchestrator | 2026-02-03 06:28:36.927012 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-03 06:28:36.927022 | orchestrator | Tuesday 03 February 2026 06:28:16 +0000 (0:00:02.309) 0:33:29.401 ****** 2026-02-03 06:28:36.927033 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:28:36.927044 | orchestrator | 2026-02-03 06:28:36.927055 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-03 06:28:36.927066 | orchestrator | Tuesday 03 February 2026 06:28:17 +0000 (0:00:00.855) 0:33:30.256 ****** 2026-02-03 06:28:36.927076 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2 2026-02-03 06:28:36.927087 | orchestrator | 2026-02-03 06:28:36.927098 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-03 06:28:36.927109 | orchestrator | Tuesday 03 February 2026 06:28:18 +0000 (0:00:01.221) 0:33:31.478 ****** 2026-02-03 06:28:36.927120 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:28:36.927130 | orchestrator | 2026-02-03 06:28:36.927141 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] 
******************** 2026-02-03 06:28:36.927152 | orchestrator | Tuesday 03 February 2026 06:28:19 +0000 (0:00:01.232) 0:33:32.711 ****** 2026-02-03 06:28:36.927162 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:28:36.927173 | orchestrator | 2026-02-03 06:28:36.927184 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-03 06:28:36.927195 | orchestrator | Tuesday 03 February 2026 06:28:20 +0000 (0:00:01.211) 0:33:33.922 ****** 2026-02-03 06:28:36.927206 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:28:36.927217 | orchestrator | 2026-02-03 06:28:36.927235 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-03 06:28:36.927246 | orchestrator | Tuesday 03 February 2026 06:28:21 +0000 (0:00:01.246) 0:33:35.168 ****** 2026-02-03 06:28:36.927257 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:28:36.927268 | orchestrator | 2026-02-03 06:28:36.927279 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-03 06:28:36.927290 | orchestrator | Tuesday 03 February 2026 06:28:23 +0000 (0:00:01.172) 0:33:36.340 ****** 2026-02-03 06:28:36.927300 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:28:36.927311 | orchestrator | 2026-02-03 06:28:36.927322 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-03 06:28:36.927333 | orchestrator | Tuesday 03 February 2026 06:28:24 +0000 (0:00:01.187) 0:33:37.528 ****** 2026-02-03 06:28:36.927344 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:28:36.927355 | orchestrator | 2026-02-03 06:28:36.927366 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-03 06:28:36.927376 | orchestrator | Tuesday 03 February 2026 06:28:25 +0000 (0:00:01.344) 0:33:38.873 ****** 2026-02-03 06:28:36.927387 | orchestrator | skipping: [testbed-node-2] 
2026-02-03 06:28:36.927398 | orchestrator | 2026-02-03 06:28:36.927414 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-03 06:28:36.927425 | orchestrator | Tuesday 03 February 2026 06:28:26 +0000 (0:00:01.245) 0:33:40.118 ****** 2026-02-03 06:28:36.927436 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:28:36.927447 | orchestrator | 2026-02-03 06:28:36.927458 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-03 06:28:36.927469 | orchestrator | Tuesday 03 February 2026 06:28:28 +0000 (0:00:01.255) 0:33:41.374 ****** 2026-02-03 06:28:36.927480 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:28:36.927490 | orchestrator | 2026-02-03 06:28:36.927501 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-03 06:28:36.927512 | orchestrator | Tuesday 03 February 2026 06:28:29 +0000 (0:00:00.836) 0:33:42.210 ****** 2026-02-03 06:28:36.927523 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2 2026-02-03 06:28:36.927534 | orchestrator | 2026-02-03 06:28:36.927544 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-03 06:28:36.927555 | orchestrator | Tuesday 03 February 2026 06:28:30 +0000 (0:00:01.188) 0:33:43.399 ****** 2026-02-03 06:28:36.927566 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph) 2026-02-03 06:28:36.927577 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/) 2026-02-03 06:28:36.927588 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-02-03 06:28:36.927599 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-02-03 06:28:36.927610 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-02-03 06:28:36.927621 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-02-03 06:28:36.927631 | 
orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-02-03 06:28:36.927642 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-02-03 06:28:36.927653 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-03 06:28:36.927664 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-03 06:28:36.927675 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-03 06:28:36.927686 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-03 06:28:36.927697 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-03 06:28:36.927708 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-03 06:28:36.927718 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph) 2026-02-03 06:28:36.927729 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph) 2026-02-03 06:28:36.927740 | orchestrator | 2026-02-03 06:28:36.927756 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-03 06:29:20.350517 | orchestrator | Tuesday 03 February 2026 06:28:36 +0000 (0:00:06.690) 0:33:50.089 ****** 2026-02-03 06:29:20.350603 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:29:20.350613 | orchestrator | 2026-02-03 06:29:20.350620 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-03 06:29:20.350626 | orchestrator | Tuesday 03 February 2026 06:28:37 +0000 (0:00:00.820) 0:33:50.910 ****** 2026-02-03 06:29:20.350632 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:29:20.350638 | orchestrator | 2026-02-03 06:29:20.350644 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-03 06:29:20.350650 | orchestrator | Tuesday 03 February 2026 06:28:38 +0000 (0:00:00.810) 0:33:51.721 ****** 2026-02-03 06:29:20.350655 | 
orchestrator | skipping: [testbed-node-2] 2026-02-03 06:29:20.350661 | orchestrator | 2026-02-03 06:29:20.350667 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-03 06:29:20.350672 | orchestrator | Tuesday 03 February 2026 06:28:39 +0000 (0:00:00.890) 0:33:52.611 ****** 2026-02-03 06:29:20.350677 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:29:20.350683 | orchestrator | 2026-02-03 06:29:20.350688 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-03 06:29:20.350694 | orchestrator | Tuesday 03 February 2026 06:28:40 +0000 (0:00:00.855) 0:33:53.466 ****** 2026-02-03 06:29:20.350699 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:29:20.350705 | orchestrator | 2026-02-03 06:29:20.350710 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-03 06:29:20.350716 | orchestrator | Tuesday 03 February 2026 06:28:41 +0000 (0:00:00.789) 0:33:54.255 ****** 2026-02-03 06:29:20.350721 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:29:20.350726 | orchestrator | 2026-02-03 06:29:20.350732 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-03 06:29:20.350739 | orchestrator | Tuesday 03 February 2026 06:28:41 +0000 (0:00:00.828) 0:33:55.084 ****** 2026-02-03 06:29:20.350744 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:29:20.350750 | orchestrator | 2026-02-03 06:29:20.350755 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-03 06:29:20.350761 | orchestrator | Tuesday 03 February 2026 06:28:42 +0000 (0:00:00.848) 0:33:55.932 ****** 2026-02-03 06:29:20.350766 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:29:20.350772 | orchestrator | 2026-02-03 06:29:20.350777 | orchestrator | TASK [ceph-config : Set_fact 
num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-03 06:29:20.350783 | orchestrator | Tuesday 03 February 2026 06:28:43 +0000 (0:00:00.808) 0:33:56.741 ****** 2026-02-03 06:29:20.350788 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:29:20.350793 | orchestrator | 2026-02-03 06:29:20.350799 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-03 06:29:20.350805 | orchestrator | Tuesday 03 February 2026 06:28:44 +0000 (0:00:00.829) 0:33:57.571 ****** 2026-02-03 06:29:20.350810 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:29:20.350816 | orchestrator | 2026-02-03 06:29:20.350821 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-03 06:29:20.350827 | orchestrator | Tuesday 03 February 2026 06:28:45 +0000 (0:00:00.913) 0:33:58.484 ****** 2026-02-03 06:29:20.350846 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:29:20.350851 | orchestrator | 2026-02-03 06:29:20.350857 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-03 06:29:20.350899 | orchestrator | Tuesday 03 February 2026 06:28:46 +0000 (0:00:00.820) 0:33:59.305 ****** 2026-02-03 06:29:20.350905 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:29:20.350910 | orchestrator | 2026-02-03 06:29:20.350916 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-03 06:29:20.350921 | orchestrator | Tuesday 03 February 2026 06:28:46 +0000 (0:00:00.851) 0:34:00.157 ****** 2026-02-03 06:29:20.350927 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:29:20.350948 | orchestrator | 2026-02-03 06:29:20.350954 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-03 06:29:20.350960 | orchestrator | Tuesday 03 February 2026 06:28:47 +0000 (0:00:00.939) 0:34:01.096 ****** 
2026-02-03 06:29:20.350965 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:29:20.350971 | orchestrator | 2026-02-03 06:29:20.350976 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-03 06:29:20.350981 | orchestrator | Tuesday 03 February 2026 06:28:48 +0000 (0:00:00.864) 0:34:01.960 ****** 2026-02-03 06:29:20.350987 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:29:20.350992 | orchestrator | 2026-02-03 06:29:20.350997 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-03 06:29:20.351003 | orchestrator | Tuesday 03 February 2026 06:28:49 +0000 (0:00:00.967) 0:34:02.928 ****** 2026-02-03 06:29:20.351008 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:29:20.351014 | orchestrator | 2026-02-03 06:29:20.351019 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-03 06:29:20.351025 | orchestrator | Tuesday 03 February 2026 06:28:50 +0000 (0:00:00.993) 0:34:03.921 ****** 2026-02-03 06:29:20.351030 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:29:20.351035 | orchestrator | 2026-02-03 06:29:20.351041 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-03 06:29:20.351048 | orchestrator | Tuesday 03 February 2026 06:28:51 +0000 (0:00:00.821) 0:34:04.742 ****** 2026-02-03 06:29:20.351053 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:29:20.351059 | orchestrator | 2026-02-03 06:29:20.351064 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-03 06:29:20.351069 | orchestrator | Tuesday 03 February 2026 06:28:52 +0000 (0:00:00.878) 0:34:05.621 ****** 2026-02-03 06:29:20.351076 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:29:20.351082 | orchestrator | 2026-02-03 06:29:20.351088 | orchestrator | 
TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-03 06:29:20.351095 | orchestrator | Tuesday 03 February 2026 06:28:53 +0000 (0:00:00.853) 0:34:06.474 ****** 2026-02-03 06:29:20.351101 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:29:20.351107 | orchestrator | 2026-02-03 06:29:20.351125 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-03 06:29:20.351131 | orchestrator | Tuesday 03 February 2026 06:28:54 +0000 (0:00:00.843) 0:34:07.317 ****** 2026-02-03 06:29:20.351138 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:29:20.351144 | orchestrator | 2026-02-03 06:29:20.351151 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-03 06:29:20.351157 | orchestrator | Tuesday 03 February 2026 06:28:54 +0000 (0:00:00.819) 0:34:08.137 ****** 2026-02-03 06:29:20.351163 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-03 06:29:20.351170 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-03 06:29:20.351176 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-03 06:29:20.351182 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:29:20.351189 | orchestrator | 2026-02-03 06:29:20.351196 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-03 06:29:20.351202 | orchestrator | Tuesday 03 February 2026 06:28:56 +0000 (0:00:01.155) 0:34:09.292 ****** 2026-02-03 06:29:20.351208 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-03 06:29:20.351214 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-03 06:29:20.351221 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-03 06:29:20.351227 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:29:20.351233 | orchestrator | 2026-02-03 06:29:20.351240 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-03 06:29:20.351246 | orchestrator | Tuesday 03 February 2026 06:28:57 +0000 (0:00:01.205) 0:34:10.498 ****** 2026-02-03 06:29:20.351252 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-03 06:29:20.351264 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-03 06:29:20.351270 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-03 06:29:20.351276 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:29:20.351282 | orchestrator | 2026-02-03 06:29:20.351289 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-03 06:29:20.351295 | orchestrator | Tuesday 03 February 2026 06:28:58 +0000 (0:00:01.147) 0:34:11.645 ****** 2026-02-03 06:29:20.351302 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:29:20.351308 | orchestrator | 2026-02-03 06:29:20.351315 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-03 06:29:20.351322 | orchestrator | Tuesday 03 February 2026 06:28:59 +0000 (0:00:00.845) 0:34:12.491 ****** 2026-02-03 06:29:20.351328 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-02-03 06:29:20.351335 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:29:20.351342 | orchestrator | 2026-02-03 06:29:20.351348 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-03 06:29:20.351354 | orchestrator | Tuesday 03 February 2026 06:29:00 +0000 (0:00:00.962) 0:34:13.453 ****** 2026-02-03 06:29:20.351360 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:29:20.351367 | orchestrator | 2026-02-03 06:29:20.351373 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-03 06:29:20.351379 | orchestrator | Tuesday 03 February 2026 06:29:01 +0000 (0:00:01.529) 
0:34:14.983 ****** 2026-02-03 06:29:20.351390 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:29:20.351397 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:29:20.351403 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-03 06:29:20.351410 | orchestrator | 2026-02-03 06:29:20.351416 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-03 06:29:20.351423 | orchestrator | Tuesday 03 February 2026 06:29:03 +0000 (0:00:01.960) 0:34:16.943 ****** 2026-02-03 06:29:20.351429 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-2 2026-02-03 06:29:20.351436 | orchestrator | 2026-02-03 06:29:20.351442 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-02-03 06:29:20.351447 | orchestrator | Tuesday 03 February 2026 06:29:05 +0000 (0:00:01.267) 0:34:18.210 ****** 2026-02-03 06:29:20.351453 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:29:20.351458 | orchestrator | 2026-02-03 06:29:20.351464 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-02-03 06:29:20.351469 | orchestrator | Tuesday 03 February 2026 06:29:06 +0000 (0:00:01.602) 0:34:19.812 ****** 2026-02-03 06:29:20.351474 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:29:20.351480 | orchestrator | 2026-02-03 06:29:20.351485 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-02-03 06:29:20.351491 | orchestrator | Tuesday 03 February 2026 06:29:07 +0000 (0:00:01.213) 0:34:21.026 ****** 2026-02-03 06:29:20.351496 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 06:29:20.351502 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 
06:29:20.351507 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 06:29:20.351512 | orchestrator | ok: [testbed-node-2 -> {{ groups[mon_group_name][0] }}] 2026-02-03 06:29:20.351518 | orchestrator | 2026-02-03 06:29:20.351523 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-02-03 06:29:20.351529 | orchestrator | Tuesday 03 February 2026 06:29:15 +0000 (0:00:07.790) 0:34:28.816 ****** 2026-02-03 06:29:20.351534 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:29:20.351540 | orchestrator | 2026-02-03 06:29:20.351545 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-02-03 06:29:20.351550 | orchestrator | Tuesday 03 February 2026 06:29:16 +0000 (0:00:01.257) 0:34:30.074 ****** 2026-02-03 06:29:20.351560 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-03 06:29:20.351566 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-02-03 06:29:20.351572 | orchestrator | 2026-02-03 06:29:20.351577 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-02-03 06:29:20.351586 | orchestrator | Tuesday 03 February 2026 06:29:20 +0000 (0:00:03.447) 0:34:33.522 ****** 2026-02-03 06:30:06.831571 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-03 06:30:06.831720 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-02-03 06:30:06.831748 | orchestrator | 2026-02-03 06:30:06.831769 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-02-03 06:30:06.831790 | orchestrator | Tuesday 03 February 2026 06:29:22 +0000 (0:00:02.142) 0:34:35.664 ****** 2026-02-03 06:30:06.831808 | orchestrator | ok: [testbed-node-2] 2026-02-03 06:30:06.831827 | orchestrator | 2026-02-03 06:30:06.831910 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-02-03 
06:30:06.831933 | orchestrator | Tuesday 03 February 2026 06:29:24 +0000 (0:00:01.605) 0:34:37.269 ****** 2026-02-03 06:30:06.831952 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:30:06.831971 | orchestrator | 2026-02-03 06:30:06.831992 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-03 06:30:06.832011 | orchestrator | Tuesday 03 February 2026 06:29:24 +0000 (0:00:00.809) 0:34:38.080 ****** 2026-02-03 06:30:06.832029 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:30:06.832050 | orchestrator | 2026-02-03 06:30:06.832068 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-03 06:30:06.832089 | orchestrator | Tuesday 03 February 2026 06:29:25 +0000 (0:00:00.910) 0:34:38.990 ****** 2026-02-03 06:30:06.832108 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-2 2026-02-03 06:30:06.832126 | orchestrator | 2026-02-03 06:30:06.832143 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-02-03 06:30:06.832159 | orchestrator | Tuesday 03 February 2026 06:29:27 +0000 (0:00:01.604) 0:34:40.595 ****** 2026-02-03 06:30:06.832178 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:30:06.832197 | orchestrator | 2026-02-03 06:30:06.832218 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-02-03 06:30:06.832239 | orchestrator | Tuesday 03 February 2026 06:29:28 +0000 (0:00:01.198) 0:34:41.793 ****** 2026-02-03 06:30:06.832258 | orchestrator | skipping: [testbed-node-2] 2026-02-03 06:30:06.832276 | orchestrator | 2026-02-03 06:30:06.832290 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-02-03 06:30:06.832302 | orchestrator | Tuesday 03 February 2026 06:29:29 +0000 (0:00:01.157) 0:34:42.951 ****** 2026-02-03 06:30:06.832313 | orchestrator | included: 
/ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-2
2026-02-03 06:30:06.832323 | orchestrator |
2026-02-03 06:30:06.832334 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-02-03 06:30:06.832345 | orchestrator | Tuesday 03 February 2026 06:29:31 +0000 (0:00:01.322) 0:34:44.274 ******
2026-02-03 06:30:06.832356 | orchestrator | ok: [testbed-node-2]
2026-02-03 06:30:06.832367 | orchestrator |
2026-02-03 06:30:06.832377 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-02-03 06:30:06.832388 | orchestrator | Tuesday 03 February 2026 06:29:33 +0000 (0:00:02.281) 0:34:46.556 ******
2026-02-03 06:30:06.832399 | orchestrator | ok: [testbed-node-2]
2026-02-03 06:30:06.832410 | orchestrator |
2026-02-03 06:30:06.832420 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-02-03 06:30:06.832432 | orchestrator | Tuesday 03 February 2026 06:29:35 +0000 (0:00:02.113) 0:34:48.669 ******
2026-02-03 06:30:06.832443 | orchestrator | ok: [testbed-node-2]
2026-02-03 06:30:06.832453 | orchestrator |
2026-02-03 06:30:06.832464 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-02-03 06:30:06.832476 | orchestrator | Tuesday 03 February 2026 06:29:37 +0000 (0:00:02.490) 0:34:51.160 ******
2026-02-03 06:30:06.832487 | orchestrator | changed: [testbed-node-2]
2026-02-03 06:30:06.832525 | orchestrator |
2026-02-03 06:30:06.832536 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-03 06:30:06.832547 | orchestrator | Tuesday 03 February 2026 06:29:41 +0000 (0:00:03.796) 0:34:54.956 ******
2026-02-03 06:30:06.832558 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-02-03 06:30:06.832569 | orchestrator |
2026-02-03 06:30:06.832580 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-02-03 06:30:06.832590 | orchestrator | Tuesday 03 February 2026 06:29:43 +0000 (0:00:01.576) 0:34:56.533 ******
2026-02-03 06:30:06.832602 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-03 06:30:06.832613 | orchestrator |
2026-02-03 06:30:06.832624 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-02-03 06:30:06.832635 | orchestrator | Tuesday 03 February 2026 06:29:46 +0000 (0:00:02.796) 0:34:59.329 ******
2026-02-03 06:30:06.832646 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-03 06:30:06.832657 | orchestrator |
2026-02-03 06:30:06.832668 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-02-03 06:30:06.832678 | orchestrator | Tuesday 03 February 2026 06:29:49 +0000 (0:00:03.015) 0:35:02.345 ******
2026-02-03 06:30:06.832689 | orchestrator | ok: [testbed-node-2]
2026-02-03 06:30:06.832700 | orchestrator |
2026-02-03 06:30:06.832710 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-02-03 06:30:06.832721 | orchestrator | Tuesday 03 February 2026 06:29:51 +0000 (0:00:02.156) 0:35:04.501 ******
2026-02-03 06:30:06.832732 | orchestrator | ok: [testbed-node-2]
2026-02-03 06:30:06.832743 | orchestrator |
2026-02-03 06:30:06.832754 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-02-03 06:30:06.832764 | orchestrator | Tuesday 03 February 2026 06:29:52 +0000 (0:00:01.221) 0:35:05.723 ******
2026-02-03 06:30:06.832775 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)
2026-02-03 06:30:06.832786 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)
2026-02-03 06:30:06.832797 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:30:06.832808 | orchestrator |
2026-02-03 06:30:06.832819 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-02-03 06:30:06.832830 | orchestrator | Tuesday 03 February 2026 06:29:53 +0000 (0:00:01.375) 0:35:07.098 ******
2026-02-03 06:30:06.832840 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-02-03 06:30:06.832890 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)
2026-02-03 06:30:06.832923 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)
2026-02-03 06:30:06.832935 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-02-03 06:30:06.832946 | orchestrator | skipping: [testbed-node-2]
2026-02-03 06:30:06.832958 | orchestrator |
2026-02-03 06:30:06.832969 | orchestrator | PLAY [Set osd flags] ***********************************************************
2026-02-03 06:30:06.832980 | orchestrator |
2026-02-03 06:30:06.832990 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-03 06:30:06.833002 | orchestrator | Tuesday 03 February 2026 06:29:56 +0000 (0:00:02.245) 0:35:09.344 ******
2026-02-03 06:30:06.833013 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:30:06.833024 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:30:06.833035 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:30:06.833046 | orchestrator |
2026-02-03 06:30:06.833057 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-03 06:30:06.833111 | orchestrator | Tuesday 03 February 2026 06:29:57 +0000 (0:00:01.836) 0:35:11.180 ******
2026-02-03 06:30:06.833124 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:30:06.833135 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:30:06.833145 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:30:06.833230 | orchestrator |
2026-02-03 06:30:06.833244 | orchestrator | TASK [Get pool list] ***********************************************************
2026-02-03 06:30:06.833255 | orchestrator | Tuesday 03 February 2026
06:29:59 +0000 (0:00:01.822) 0:35:13.003 ****** 2026-02-03 06:30:06.833278 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-03 06:30:06.833289 | orchestrator | 2026-02-03 06:30:06.833300 | orchestrator | TASK [Get balancer module status] ********************************************** 2026-02-03 06:30:06.833311 | orchestrator | Tuesday 03 February 2026 06:30:02 +0000 (0:00:03.170) 0:35:16.173 ****** 2026-02-03 06:30:06.833322 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-03 06:30:06.833333 | orchestrator | 2026-02-03 06:30:06.833343 | orchestrator | TASK [Set_fact pools_pgautoscaler_mode] **************************************** 2026-02-03 06:30:06.833354 | orchestrator | Tuesday 03 February 2026 06:30:06 +0000 (0:00:03.196) 0:35:19.369 ****** 2026-02-03 06:30:06.833378 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 1, 'pool_name': '.mgr', 'create_time': '2026-02-03T03:45:47.888101+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '20', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 
'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_acting': 6.059999942779541, 'score_stable': 6.059999942779541, 'optimal_score': 0.33000001311302185, 'raw_score_acting': 2, 'raw_score_stable': 2, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-03 06:30:06.833409 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 2, 'pool_name': 'cephfs_data', 'create_time': '2026-02-03T03:47:02.178413+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '32', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 
'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'cephfs': {'data': 'cephfs'}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-03 06:30:07.750078 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 3, 'pool_name': 'cephfs_metadata', 'create_time': '2026-02-03T03:47:05.656785+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 16, 'pg_placement_num': 16, 'pg_placement_num_target': 16, 'pg_num_target': 16, 'pg_num_pending': 16, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '70', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 
'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_autoscale_bias': 4, 'pg_num_min': 16, 'recovery_priority': 5}, 'application_metadata': {'cephfs': {'metadata': 'cephfs'}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-03 06:30:07.750179 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 4, 'pool_name': 'default.rgw.buckets.data', 'create_time': '2026-02-03T03:48:05.555913+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '81', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '77', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 
'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-03 06:30:07.750215 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 5, 'pool_name': 'default.rgw.buckets.index', 'create_time': '2026-02-03T03:48:12.473312+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '81', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 
'last_force_op_resend_preluminous': '77', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-03 06:30:07.750225 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 6, 'pool_name': 'default.rgw.control', 'create_time': '2026-02-03T03:48:18.139571+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '81', 
'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '79', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-03 06:30:07.750252 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 7, 'pool_name': 'default.rgw.log', 'create_time': '2026-02-03T03:48:24.596108+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 
'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '199', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '79', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-03 06:30:08.270369 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 8, 'pool_name': 'default.rgw.meta', 'create_time': '2026-02-03T03:48:30.778595+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': 
'0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '87', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '81', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-03 06:30:08.270536 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 9, 'pool_name': '.rgw.root', 'create_time': '2026-02-03T03:48:43.291244+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 
'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '87', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '81', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-03 06:30:08.270578 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 10, 'pool_name': 'backups', 'create_time': '2026-02-03T03:49:31.077110+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 
32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '111', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 111, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 2.059999942779541, 'score_stable': 2.059999942779541, 'optimal_score': 1, 'raw_score_acting': 2.059999942779541, 'raw_score_stable': 2.059999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-03 06:30:08.270605 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 11, 'pool_name': 'volumes', 'create_time': '2026-02-03T03:49:40.513198+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 
'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '118', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 118, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-03 06:30:08.270627 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 12, 'pool_name': 'images', 'create_time': '2026-02-03T03:49:49.609540+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 
'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '211', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 6, 'snap_epoch': 211, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-03 06:31:46.675594 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 13, 'pool_name': 'metrics', 'create_time': '2026-02-03T03:49:58.037165+0000', 'flags': 8193, 'flags_names': 
'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '133', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 133, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-03 06:31:46.675697 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 14, 'pool_name': 'vms', 
'create_time': '2026-02-03T03:50:06.627723+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '142', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 142, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-03 06:31:46.675730 | orchestrator | 2026-02-03 06:31:46.675756 | orchestrator | TASK [Disable balancer] 
********************************************************
2026-02-03 06:31:46.675766 | orchestrator | Tuesday 03 February 2026 06:30:09 +0000 (0:00:03.138) 0:35:22.508 ******
2026-02-03 06:31:46.675774 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-03 06:31:46.675780 | orchestrator |
2026-02-03 06:31:46.675787 | orchestrator | TASK [Disable pg autoscale on pools] *******************************************
2026-02-03 06:31:46.675793 | orchestrator | Tuesday 03 February 2026 06:30:12 +0000 (0:00:03.343) 0:35:25.851 ******
2026-02-03 06:31:46.675801 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'})
2026-02-03 06:31:46.675810 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'})
2026-02-03 06:31:46.675818 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'})
2026-02-03 06:31:46.675883 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'})
2026-02-03 06:31:46.675899 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'})
2026-02-03 06:31:46.675906 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'})
2026-02-03 06:31:46.675914 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'})
2026-02-03 06:31:46.675921 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'})
2026-02-03 06:31:46.675928 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'})
2026-02-03 06:31:46.675936 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})
2026-02-03 06:31:46.675943 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})
2026-02-03 06:31:46.675951 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})
2026-02-03 06:31:46.675958 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})
2026-02-03 06:31:46.675965 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})
2026-02-03 06:31:46.675973 | orchestrator |
2026-02-03 06:31:46.675980 | orchestrator | TASK [Set osd flags] ***********************************************************
2026-02-03 06:31:46.675987 | orchestrator | Tuesday 03 February 2026 06:31:29 +0000 (0:01:16.382) 0:36:42.234 ******
2026-02-03 06:31:46.675994 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout)
2026-02-03 06:31:46.676002 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub)
2026-02-03 06:31:46.676009 | orchestrator |
2026-02-03 06:31:46.676016 | orchestrator | PLAY [Upgrade ceph osds cluster] ***********************************************
2026-02-03 06:31:46.676024 | orchestrator |
2026-02-03 06:31:46.676036 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-03 06:31:46.676043 | orchestrator | Tuesday 03 February 2026 06:31:34 +0000 (0:00:05.884) 0:36:48.118 ******
2026-02-03 06:31:46.676050 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3
2026-02-03 06:31:46.676057 | orchestrator |
2026-02-03 06:31:46.676065 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-03 06:31:46.676072 | orchestrator | Tuesday 03 February 2026 06:31:36 +0000 (0:00:01.503) 0:36:49.622 ******
2026-02-03 06:31:46.676080 | orchestrator | ok: [testbed-node-3]
2026-02-03
06:31:46.676088 | orchestrator | 2026-02-03 06:31:46.676096 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-03 06:31:46.676104 | orchestrator | Tuesday 03 February 2026 06:31:37 +0000 (0:00:01.504) 0:36:51.127 ****** 2026-02-03 06:31:46.676112 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:31:46.676120 | orchestrator | 2026-02-03 06:31:46.676128 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-03 06:31:46.676136 | orchestrator | Tuesday 03 February 2026 06:31:39 +0000 (0:00:01.150) 0:36:52.278 ****** 2026-02-03 06:31:46.676144 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:31:46.676152 | orchestrator | 2026-02-03 06:31:46.676159 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-03 06:31:46.676167 | orchestrator | Tuesday 03 February 2026 06:31:40 +0000 (0:00:01.492) 0:36:53.770 ****** 2026-02-03 06:31:46.676175 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:31:46.676183 | orchestrator | 2026-02-03 06:31:46.676191 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-03 06:31:46.676199 | orchestrator | Tuesday 03 February 2026 06:31:41 +0000 (0:00:01.179) 0:36:54.950 ****** 2026-02-03 06:31:46.676207 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:31:46.676215 | orchestrator | 2026-02-03 06:31:46.676222 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-03 06:31:46.676230 | orchestrator | Tuesday 03 February 2026 06:31:42 +0000 (0:00:01.205) 0:36:56.156 ****** 2026-02-03 06:31:46.676238 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:31:46.676246 | orchestrator | 2026-02-03 06:31:46.676254 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-03 06:31:46.676262 | orchestrator | Tuesday 03 February 2026 06:31:44 
+0000 (0:00:01.248) 0:36:57.405 ****** 2026-02-03 06:31:46.676270 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:31:46.676278 | orchestrator | 2026-02-03 06:31:46.676286 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-03 06:31:46.676293 | orchestrator | Tuesday 03 February 2026 06:31:45 +0000 (0:00:01.232) 0:36:58.637 ****** 2026-02-03 06:31:46.676300 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:31:46.676307 | orchestrator | 2026-02-03 06:31:46.676319 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-03 06:32:13.997423 | orchestrator | Tuesday 03 February 2026 06:31:46 +0000 (0:00:01.213) 0:36:59.851 ****** 2026-02-03 06:32:13.997572 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:32:13.997593 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:32:13.997613 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:32:13.997634 | orchestrator | 2026-02-03 06:32:13.997655 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-03 06:32:13.997675 | orchestrator | Tuesday 03 February 2026 06:31:48 +0000 (0:00:02.087) 0:37:01.938 ****** 2026-02-03 06:32:13.997698 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:32:13.997711 | orchestrator | 2026-02-03 06:32:13.997723 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-03 06:32:13.997734 | orchestrator | Tuesday 03 February 2026 06:31:50 +0000 (0:00:01.357) 0:37:03.296 ****** 2026-02-03 06:32:13.997745 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:32:13.997813 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 
2026-02-03 06:32:13.997883 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-03 06:32:13.997902 | orchestrator |
2026-02-03 06:32:13.997921 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-03 06:32:13.997940 | orchestrator | Tuesday 03 February 2026 06:31:53 +0000 (0:00:03.481) 0:37:06.777 ******
2026-02-03 06:32:13.997960 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-03 06:32:13.997980 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-03 06:32:13.997993 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-03 06:32:13.998006 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:32:13.998076 | orchestrator |
2026-02-03 06:32:13.998090 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-03 06:32:13.998104 | orchestrator | Tuesday 03 February 2026 06:31:55 +0000 (0:00:02.114) 0:37:08.892 ******
2026-02-03 06:32:13.998120 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-03 06:32:13.998136 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-03 06:32:13.998149 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-03 06:32:13.998163 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:32:13.998177 | orchestrator |
2026-02-03 06:32:13.998189 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-03 06:32:13.998199 | orchestrator | Tuesday 03 February 2026 06:31:57 +0000 (0:00:02.144) 0:37:11.037 ******
2026-02-03 06:32:13.998213 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-03 06:32:13.998228 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-03 06:32:13.998241 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-03 06:32:13.998261 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:32:13.998280 | orchestrator |
2026-02-03 06:32:13.998299 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-03 06:32:13.998317 | orchestrator | Tuesday 03 February 2026 06:31:59 +0000 (0:00:01.430) 0:37:12.467 ******
2026-02-03 06:32:13.998352 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'fc9af7e241e8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-03 06:31:50.680487', 'end': '2026-02-03 06:31:50.723654', 'delta': '0:00:00.043167', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fc9af7e241e8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-03 06:32:13.998387 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a8f198eef309', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-03 06:31:51.698897', 'end': '2026-02-03 06:31:51.742394', 'delta': '0:00:00.043497', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a8f198eef309'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-03 06:32:13.998399 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '79d18794d8bb', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-03 06:31:52.301382', 'end': '2026-02-03 06:31:52.358733', 'delta': '0:00:00.057351', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['79d18794d8bb'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-03 06:32:13.998411 | orchestrator |
2026-02-03 06:32:13.998422 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-03 06:32:13.998433 | orchestrator | Tuesday 03 February 2026 06:32:00 +0000 (0:00:01.295) 0:37:13.763 ******
2026-02-03 06:32:13.998444 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:32:13.998455 | orchestrator |
2026-02-03 06:32:13.998466 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-03 06:32:13.998477 | orchestrator | Tuesday 03 February 2026 06:32:01 +0000 (0:00:01.367) 0:37:15.073 ******
2026-02-03 06:32:13.998488 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:32:13.998499 | orchestrator |
2026-02-03 06:32:13.998509 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-03 06:32:13.998520 | orchestrator | Tuesday 03 February 2026 06:32:03 +0000 (0:00:01.205) 0:37:16.440 ******
2026-02-03 06:32:13.998531 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:32:13.998542 | orchestrator |
2026-02-03 06:32:13.998552 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-03 06:32:13.998563 | orchestrator | Tuesday 03 February 2026 06:32:04 +0000 (0:00:01.205) 0:37:17.646 ******
2026-02-03 06:32:13.998574 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-03 06:32:13.998585 | orchestrator |
2026-02-03 06:32:13.998596 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-03 06:32:13.998607 | orchestrator | Tuesday 03 February 2026 06:32:06 +0000 (0:00:02.096) 0:37:19.742 ******
2026-02-03 06:32:13.998617 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:32:13.998628 | orchestrator |
2026-02-03 06:32:13.998639 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-03 06:32:13.998649 | orchestrator | Tuesday 03 February 2026 06:32:07 +0000 (0:00:01.246) 0:37:20.989 ******
2026-02-03 06:32:13.998660 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:32:13.998678 | orchestrator |
2026-02-03 06:32:13.998689 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-03 06:32:13.998700 | orchestrator | Tuesday 03 February 2026 06:32:09 +0000 (0:00:01.202) 0:37:22.191 ******
2026-02-03 06:32:13.998710 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:32:13.998721 | orchestrator |
2026-02-03 06:32:13.998732 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-03 06:32:13.998743 | orchestrator | Tuesday 03 February 2026 06:32:10 +0000 (0:00:01.274) 0:37:23.466 ******
2026-02-03 06:32:13.998754 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:32:13.998765 | orchestrator |
2026-02-03 06:32:13.998775 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-03 06:32:13.998786 | orchestrator | Tuesday 03 February 2026 06:32:11 +0000 (0:00:01.207) 0:37:24.674 ******
2026-02-03 06:32:13.998797 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:32:13.998807 | orchestrator |
2026-02-03 06:32:13.998845 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-03 06:32:13.998856 | orchestrator | Tuesday 03 February 2026 06:32:12 +0000 (0:00:01.209) 0:37:25.883 ******
2026-02-03 06:32:13.998875 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:32:19.268232 | orchestrator |
2026-02-03 06:32:19.268342 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-03 06:32:19.268360 | orchestrator | Tuesday 03 February 2026 06:32:13 +0000 (0:00:01.288) 0:37:27.172 ******
2026-02-03 06:32:19.268372 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:32:19.268385 | orchestrator |
2026-02-03 06:32:19.268396 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-03 06:32:19.268408 | orchestrator | Tuesday 03 February 2026 06:32:15 +0000 (0:00:01.308) 0:37:28.481 ******
2026-02-03 06:32:19.268420 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:32:19.268432 | orchestrator |
2026-02-03 06:32:19.268444 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-03 06:32:19.268455 | orchestrator | Tuesday 03 February 2026 06:32:16 +0000 (0:00:01.277) 0:37:29.758 ******
2026-02-03 06:32:19.268466 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:32:19.268477 | orchestrator |
2026-02-03 06:32:19.268488 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-03 06:32:19.268501 | orchestrator | Tuesday 03 February 2026 06:32:17 +0000 (0:00:01.172) 0:37:30.930 ******
2026-02-03 06:32:19.268512 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:32:19.268523 | orchestrator |
2026-02-03 06:32:19.268551 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-03 06:32:19.268563 | orchestrator | Tuesday 03 February 2026 06:32:18 +0000 (0:00:01.247) 0:37:32.178 ******
2026-02-03 06:32:19.268577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-03 06:32:19.268593 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--bafb60f3--a5a9--526b--adce--8ea58a9a19cd-osd--block--bafb60f3--a5a9--526b--adce--8ea58a9a19cd', 'dm-uuid-LVM-stKE3AAHbU7tUFxIQAJ72dtWy4EVot1jnVMQamLoChpHBSYL0cLNGgZFRZ56lw3T'], 'uuids': ['027247ae-00a3-443e-9633-8d8391a7da1a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '8097be92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['nVMQam-LoCh-pHBS-YL0c-LNGg-ZFRZ-56lw3T']}})
2026-02-03 06:32:19.268609 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_30942d1f-f704-43cc-bdd1-e5a5821a35c3', 'scsi-SQEMU_QEMU_HARDDISK_30942d1f-f704-43cc-bdd1-e5a5821a35c3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '30942d1f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-03 06:32:19.268646 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Xh8ZTx-AObI-x7Qe-6Flc-GeSw-194p-Pfmv8i', 'scsi-0QEMU_QEMU_HARDDISK_b4cf4752-8315-482c-8a5b-0aee9859091f', 'scsi-SQEMU_QEMU_HARDDISK_b4cf4752-8315-482c-8a5b-0aee9859091f'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b4cf4752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--85b6ff9c--bd3f--596f--9d81--0006b9d69e29-osd--block--85b6ff9c--bd3f--596f--9d81--0006b9d69e29']}})
2026-02-03 06:32:19.268660 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-03 06:32:19.268691 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-03 06:32:19.268710 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-03 06:32:19.268723 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-03 06:32:19.268735 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Vax7AP-xd4A-5hvn-IJK2-L8WY-uJjg-ErTdLp', 'dm-uuid-CRYPT-LUKS2-51cdba44ba2f44e4a9ba680ba42622f2-Vax7AP-xd4A-5hvn-IJK2-L8WY-uJjg-ErTdLp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-03 06:32:19.268747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-03 06:32:19.268769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--85b6ff9c--bd3f--596f--9d81--0006b9d69e29-osd--block--85b6ff9c--bd3f--596f--9d81--0006b9d69e29', 'dm-uuid-LVM-eCnBPCzOsBAMg7ZG1zzxsebDLR9lBnAnVax7APxd4A5hvnIJK2L8WYuJjgErTdLp'], 'uuids': ['51cdba44-ba2f-44e4-a9ba-680ba42622f2'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b4cf4752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Vax7AP-xd4A-5hvn-IJK2-L8WY-uJjg-ErTdLp']}})
2026-02-03 06:32:19.268783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-MNylkH-UFIw-FcM9-RNy8-22Oh-QCDT-pfyDSJ', 'scsi-0QEMU_QEMU_HARDDISK_8097be92-44ca-4be8-a1da-39ba5887696e', 'scsi-SQEMU_QEMU_HARDDISK_8097be92-44ca-4be8-a1da-39ba5887696e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8097be92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--bafb60f3--a5a9--526b--adce--8ea58a9a19cd-osd--block--bafb60f3--a5a9--526b--adce--8ea58a9a19cd']}})
2026-02-03 06:32:19.268803 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-03 06:32:20.770429 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '26fa6d1d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part16', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part14', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part15', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part1', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-03 06:32:20.770561 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-03 06:32:20.770581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-03 06:32:20.770595 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-nVMQam-LoCh-pHBS-YL0c-LNGg-ZFRZ-56lw3T', 'dm-uuid-CRYPT-LUKS2-027247ae00a3443e96338d8391a7da1a-nVMQam-LoCh-pHBS-YL0c-LNGg-ZFRZ-56lw3T'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-03 06:32:20.770609 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:32:20.770622 | orchestrator |
2026-02-03 06:32:20.770634 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-03 06:32:20.770647 | orchestrator | Tuesday 03 February 2026 06:32:20 +0000 (0:00:01.508) 0:37:33.687 ******
2026-02-03 06:32:20.770677 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-03 06:32:20.770698 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--bafb60f3--a5a9--526b--adce--8ea58a9a19cd-osd--block--bafb60f3--a5a9--526b--adce--8ea58a9a19cd', 'dm-uuid-LVM-stKE3AAHbU7tUFxIQAJ72dtWy4EVot1jnVMQamLoChpHBSYL0cLNGgZFRZ56lw3T'], 'uuids': ['027247ae-00a3-443e-9633-8d8391a7da1a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '8097be92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['nVMQam-LoCh-pHBS-YL0c-LNGg-ZFRZ-56lw3T']}}, 'ansible_loop_var': 'item'})
2026-02-03 06:32:20.770711 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_30942d1f-f704-43cc-bdd1-e5a5821a35c3', 'scsi-SQEMU_QEMU_HARDDISK_30942d1f-f704-43cc-bdd1-e5a5821a35c3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '30942d1f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-03 06:32:20.770731 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Xh8ZTx-AObI-x7Qe-6Flc-GeSw-194p-Pfmv8i', 'scsi-0QEMU_QEMU_HARDDISK_b4cf4752-8315-482c-8a5b-0aee9859091f', 'scsi-SQEMU_QEMU_HARDDISK_b4cf4752-8315-482c-8a5b-0aee9859091f'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b4cf4752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--85b6ff9c--bd3f--596f--9d81--0006b9d69e29-osd--block--85b6ff9c--bd3f--596f--9d81--0006b9d69e29']}}, 'ansible_loop_var': 'item'})
2026-02-03 06:32:20.770744 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-03 06:32:20.770763 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-03 06:32:22.047994 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-03 06:32:22.048117 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-03 06:32:22.048161 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Vax7AP-xd4A-5hvn-IJK2-L8WY-uJjg-ErTdLp', 'dm-uuid-CRYPT-LUKS2-51cdba44ba2f44e4a9ba680ba42622f2-Vax7AP-xd4A-5hvn-IJK2-L8WY-uJjg-ErTdLp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-03 06:32:22.048174 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [],
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:32:22.048187 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--85b6ff9c--bd3f--596f--9d81--0006b9d69e29-osd--block--85b6ff9c--bd3f--596f--9d81--0006b9d69e29', 'dm-uuid-LVM-eCnBPCzOsBAMg7ZG1zzxsebDLR9lBnAnVax7APxd4A5hvnIJK2L8WYuJjgErTdLp'], 'uuids': ['51cdba44-ba2f-44e4-a9ba-680ba42622f2'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b4cf4752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Vax7AP-xd4A-5hvn-IJK2-L8WY-uJjg-ErTdLp']}}, 'ansible_loop_var': 'item'})  2026-02-03 06:32:22.048227 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-MNylkH-UFIw-FcM9-RNy8-22Oh-QCDT-pfyDSJ', 'scsi-0QEMU_QEMU_HARDDISK_8097be92-44ca-4be8-a1da-39ba5887696e', 'scsi-SQEMU_QEMU_HARDDISK_8097be92-44ca-4be8-a1da-39ba5887696e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8097be92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--bafb60f3--a5a9--526b--adce--8ea58a9a19cd-osd--block--bafb60f3--a5a9--526b--adce--8ea58a9a19cd']}}, 'ansible_loop_var': 'item'})  2026-02-03 06:32:22.048244 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:32:22.048267 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '26fa6d1d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part16', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part14', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part15', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part1', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:32:22.048280 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:32:22.048305 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:33:02.226688 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-nVMQam-LoCh-pHBS-YL0c-LNGg-ZFRZ-56lw3T', 'dm-uuid-CRYPT-LUKS2-027247ae00a3443e96338d8391a7da1a-nVMQam-LoCh-pHBS-YL0c-LNGg-ZFRZ-56lw3T'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:33:02.226867 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:33:02.226888 | orchestrator | 2026-02-03 06:33:02.226900 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-03 06:33:02.226912 | orchestrator | Tuesday 03 February 2026 06:32:22 +0000 (0:00:01.537) 0:37:35.225 ****** 2026-02-03 06:33:02.226921 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:33:02.226932 | orchestrator | 2026-02-03 06:33:02.226942 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-03 06:33:02.226952 | orchestrator | Tuesday 03 February 2026 06:32:23 +0000 (0:00:01.605) 0:37:36.830 ****** 2026-02-03 06:33:02.226961 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:33:02.226971 | orchestrator | 2026-02-03 06:33:02.226981 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-03 06:33:02.226990 | orchestrator | Tuesday 03 February 2026 06:32:24 +0000 (0:00:01.215) 0:37:38.045 ****** 2026-02-03 06:33:02.227000 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:33:02.227009 | orchestrator | 2026-02-03 06:33:02.227019 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-03 06:33:02.227028 | orchestrator | Tuesday 03 February 2026 06:32:26 +0000 (0:00:01.567) 0:37:39.613 ****** 2026-02-03 06:33:02.227038 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:33:02.227048 | orchestrator | 2026-02-03 06:33:02.227057 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-03 06:33:02.227066 | orchestrator | Tuesday 03 February 2026 06:32:27 +0000 (0:00:01.191) 0:37:40.804 ****** 2026-02-03 06:33:02.227076 | orchestrator | skipping: [testbed-node-3] 2026-02-03 
06:33:02.227086 | orchestrator | 2026-02-03 06:33:02.227095 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-03 06:33:02.227105 | orchestrator | Tuesday 03 February 2026 06:32:28 +0000 (0:00:01.338) 0:37:42.143 ****** 2026-02-03 06:33:02.227114 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:33:02.227124 | orchestrator | 2026-02-03 06:33:02.227133 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-03 06:33:02.227143 | orchestrator | Tuesday 03 February 2026 06:32:30 +0000 (0:00:01.281) 0:37:43.424 ****** 2026-02-03 06:33:02.227152 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-03 06:33:02.227162 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-03 06:33:02.227172 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-03 06:33:02.227182 | orchestrator | 2026-02-03 06:33:02.227191 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-03 06:33:02.227201 | orchestrator | Tuesday 03 February 2026 06:32:32 +0000 (0:00:02.315) 0:37:45.739 ****** 2026-02-03 06:33:02.227211 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-03 06:33:02.227221 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-03 06:33:02.227230 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-03 06:33:02.227240 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:33:02.227250 | orchestrator | 2026-02-03 06:33:02.227259 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-03 06:33:02.227269 | orchestrator | Tuesday 03 February 2026 06:32:33 +0000 (0:00:01.339) 0:37:47.080 ****** 2026-02-03 06:33:02.227278 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-02-03 06:33:02.227289 | 
orchestrator | 2026-02-03 06:33:02.227299 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-03 06:33:02.227309 | orchestrator | Tuesday 03 February 2026 06:32:35 +0000 (0:00:01.246) 0:37:48.326 ****** 2026-02-03 06:33:02.227327 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:33:02.227337 | orchestrator | 2026-02-03 06:33:02.227347 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-03 06:33:02.227356 | orchestrator | Tuesday 03 February 2026 06:32:36 +0000 (0:00:01.262) 0:37:49.589 ****** 2026-02-03 06:33:02.227366 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:33:02.227375 | orchestrator | 2026-02-03 06:33:02.227385 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-03 06:33:02.227394 | orchestrator | Tuesday 03 February 2026 06:32:37 +0000 (0:00:01.171) 0:37:50.761 ****** 2026-02-03 06:33:02.227404 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:33:02.227413 | orchestrator | 2026-02-03 06:33:02.227423 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-03 06:33:02.227432 | orchestrator | Tuesday 03 February 2026 06:32:38 +0000 (0:00:01.208) 0:37:51.969 ****** 2026-02-03 06:33:02.227442 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:33:02.227451 | orchestrator | 2026-02-03 06:33:02.227461 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-03 06:33:02.227470 | orchestrator | Tuesday 03 February 2026 06:32:40 +0000 (0:00:01.292) 0:37:53.262 ****** 2026-02-03 06:33:02.227495 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-03 06:33:02.227522 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-03 06:33:02.227532 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-02-03 06:33:02.227542 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:33:02.227552 | orchestrator | 2026-02-03 06:33:02.227561 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-03 06:33:02.227571 | orchestrator | Tuesday 03 February 2026 06:32:41 +0000 (0:00:01.466) 0:37:54.729 ****** 2026-02-03 06:33:02.227581 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-03 06:33:02.227590 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-03 06:33:02.227600 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-03 06:33:02.227609 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:33:02.227619 | orchestrator | 2026-02-03 06:33:02.227628 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-03 06:33:02.227638 | orchestrator | Tuesday 03 February 2026 06:32:43 +0000 (0:00:01.490) 0:37:56.220 ****** 2026-02-03 06:33:02.227647 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-03 06:33:02.227657 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-03 06:33:02.227667 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-03 06:33:02.227676 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:33:02.227686 | orchestrator | 2026-02-03 06:33:02.227695 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-03 06:33:02.227705 | orchestrator | Tuesday 03 February 2026 06:32:44 +0000 (0:00:01.460) 0:37:57.680 ****** 2026-02-03 06:33:02.227715 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:33:02.227724 | orchestrator | 2026-02-03 06:33:02.227734 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-03 06:33:02.227743 | orchestrator | Tuesday 03 February 2026 06:32:45 +0000 
(0:00:01.224) 0:37:58.905 ****** 2026-02-03 06:33:02.227753 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-03 06:33:02.227763 | orchestrator | 2026-02-03 06:33:02.227773 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-03 06:33:02.227782 | orchestrator | Tuesday 03 February 2026 06:32:47 +0000 (0:00:01.400) 0:38:00.305 ****** 2026-02-03 06:33:02.227792 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:33:02.227802 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:33:02.227835 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:33:02.227852 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-03 06:33:02.227862 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-03 06:33:02.227871 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-03 06:33:02.227881 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-03 06:33:02.227890 | orchestrator | 2026-02-03 06:33:02.227900 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-03 06:33:02.227909 | orchestrator | Tuesday 03 February 2026 06:32:49 +0000 (0:00:02.319) 0:38:02.625 ****** 2026-02-03 06:33:02.227919 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:33:02.227929 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:33:02.227938 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:33:02.227948 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-03 06:33:02.227957 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-03 06:33:02.227967 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-03 06:33:02.227976 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-03 06:33:02.227986 | orchestrator | 2026-02-03 06:33:02.227995 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-02-03 06:33:02.228005 | orchestrator | Tuesday 03 February 2026 06:32:52 +0000 (0:00:03.239) 0:38:05.865 ****** 2026-02-03 06:33:02.228014 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:33:02.228024 | orchestrator | 2026-02-03 06:33:02.228033 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-02-03 06:33:02.228043 | orchestrator | Tuesday 03 February 2026 06:32:54 +0000 (0:00:01.509) 0:38:07.374 ****** 2026-02-03 06:33:02.228052 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:33:02.228062 | orchestrator | 2026-02-03 06:33:02.228071 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-02-03 06:33:02.228081 | orchestrator | Tuesday 03 February 2026 06:32:55 +0000 (0:00:01.237) 0:38:08.612 ****** 2026-02-03 06:33:02.228091 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:33:02.228100 | orchestrator | 2026-02-03 06:33:02.228110 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-02-03 06:33:02.228119 | orchestrator | Tuesday 03 February 2026 06:32:56 +0000 (0:00:01.338) 0:38:09.951 ****** 2026-02-03 06:33:02.228129 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-03 06:33:02.228139 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-02-03 06:33:02.228148 | orchestrator | 2026-02-03 06:33:02.228158 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************ 2026-02-03 06:33:02.228167 | orchestrator | Tuesday 03 February 2026 06:33:01 +0000 (0:00:04.259) 0:38:14.210 ****** 2026-02-03 06:33:02.228177 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3 2026-02-03 06:33:02.228187 | orchestrator | 2026-02-03 06:33:02.228196 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-03 06:33:02.228218 | orchestrator | Tuesday 03 February 2026 06:33:02 +0000 (0:00:01.191) 0:38:15.402 ****** 2026-02-03 06:33:55.835221 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3 2026-02-03 06:33:55.835326 | orchestrator | 2026-02-03 06:33:55.835337 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-03 06:33:55.835344 | orchestrator | Tuesday 03 February 2026 06:33:03 +0000 (0:00:01.214) 0:38:16.616 ****** 2026-02-03 06:33:55.835351 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:33:55.835359 | orchestrator | 2026-02-03 06:33:55.835365 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-03 06:33:55.835394 | orchestrator | Tuesday 03 February 2026 06:33:04 +0000 (0:00:01.200) 0:38:17.817 ****** 2026-02-03 06:33:55.835400 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:33:55.835407 | orchestrator | 2026-02-03 06:33:55.835414 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-03 06:33:55.835420 | orchestrator | Tuesday 03 February 2026 06:33:06 +0000 (0:00:01.696) 0:38:19.514 ****** 2026-02-03 06:33:55.835425 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:33:55.835431 | orchestrator | 2026-02-03 06:33:55.835437 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-03 06:33:55.835443 | orchestrator | Tuesday 03 February 2026 
06:33:07 +0000 (0:00:01.615) 0:38:21.129 ****** 2026-02-03 06:33:55.835450 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:33:55.835456 | orchestrator | 2026-02-03 06:33:55.835462 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-03 06:33:55.835469 | orchestrator | Tuesday 03 February 2026 06:33:09 +0000 (0:00:01.798) 0:38:22.927 ****** 2026-02-03 06:33:55.835474 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:33:55.835480 | orchestrator | 2026-02-03 06:33:55.835486 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-03 06:33:55.835492 | orchestrator | Tuesday 03 February 2026 06:33:10 +0000 (0:00:01.186) 0:38:24.114 ****** 2026-02-03 06:33:55.835499 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:33:55.835505 | orchestrator | 2026-02-03 06:33:55.835511 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-03 06:33:55.835517 | orchestrator | Tuesday 03 February 2026 06:33:12 +0000 (0:00:01.199) 0:38:25.313 ****** 2026-02-03 06:33:55.835523 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:33:55.835529 | orchestrator | 2026-02-03 06:33:55.835535 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-03 06:33:55.835542 | orchestrator | Tuesday 03 February 2026 06:33:13 +0000 (0:00:01.142) 0:38:26.456 ****** 2026-02-03 06:33:55.835548 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:33:55.835554 | orchestrator | 2026-02-03 06:33:55.835560 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-03 06:33:55.835566 | orchestrator | Tuesday 03 February 2026 06:33:14 +0000 (0:00:01.539) 0:38:27.996 ****** 2026-02-03 06:33:55.835572 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:33:55.835579 | orchestrator | 2026-02-03 06:33:55.835585 | orchestrator | TASK [ceph-handler : 
Include check_socket_non_container.yml] ******************* 2026-02-03 06:33:55.835591 | orchestrator | Tuesday 03 February 2026 06:33:16 +0000 (0:00:01.622) 0:38:29.618 ****** 2026-02-03 06:33:55.835597 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:33:55.835603 | orchestrator | 2026-02-03 06:33:55.835609 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-03 06:33:55.835615 | orchestrator | Tuesday 03 February 2026 06:33:17 +0000 (0:00:01.226) 0:38:30.845 ****** 2026-02-03 06:33:55.835624 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:33:55.835632 | orchestrator | 2026-02-03 06:33:55.835638 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-03 06:33:55.835645 | orchestrator | Tuesday 03 February 2026 06:33:18 +0000 (0:00:01.200) 0:38:32.046 ****** 2026-02-03 06:33:55.835651 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:33:55.835657 | orchestrator | 2026-02-03 06:33:55.835664 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-03 06:33:55.835670 | orchestrator | Tuesday 03 February 2026 06:33:20 +0000 (0:00:01.227) 0:38:33.274 ****** 2026-02-03 06:33:55.835677 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:33:55.835683 | orchestrator | 2026-02-03 06:33:55.835690 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-03 06:33:55.835698 | orchestrator | Tuesday 03 February 2026 06:33:21 +0000 (0:00:01.190) 0:38:34.464 ****** 2026-02-03 06:33:55.835703 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:33:55.835710 | orchestrator | 2026-02-03 06:33:55.835716 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-03 06:33:55.835722 | orchestrator | Tuesday 03 February 2026 06:33:22 +0000 (0:00:01.201) 0:38:35.666 ****** 2026-02-03 06:33:55.835734 | orchestrator | skipping: 
[testbed-node-3] 2026-02-03 06:33:55.835740 | orchestrator | 2026-02-03 06:33:55.835746 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-03 06:33:55.835752 | orchestrator | Tuesday 03 February 2026 06:33:23 +0000 (0:00:01.190) 0:38:36.856 ****** 2026-02-03 06:33:55.835758 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:33:55.835763 | orchestrator | 2026-02-03 06:33:55.835770 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-03 06:33:55.835776 | orchestrator | Tuesday 03 February 2026 06:33:24 +0000 (0:00:01.275) 0:38:38.132 ****** 2026-02-03 06:33:55.835783 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:33:55.835789 | orchestrator | 2026-02-03 06:33:55.835795 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-03 06:33:55.835826 | orchestrator | Tuesday 03 February 2026 06:33:26 +0000 (0:00:01.244) 0:38:39.376 ****** 2026-02-03 06:33:55.835833 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:33:55.835839 | orchestrator | 2026-02-03 06:33:55.835845 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-03 06:33:55.835851 | orchestrator | Tuesday 03 February 2026 06:33:27 +0000 (0:00:01.331) 0:38:40.708 ****** 2026-02-03 06:33:55.835858 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:33:55.835864 | orchestrator | 2026-02-03 06:33:55.835870 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-03 06:33:55.835876 | orchestrator | Tuesday 03 February 2026 06:33:28 +0000 (0:00:01.249) 0:38:41.957 ****** 2026-02-03 06:33:55.835895 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:33:55.835901 | orchestrator | 2026-02-03 06:33:55.835924 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-03 06:33:55.835931 | 
orchestrator | Tuesday 03 February 2026 06:33:29 +0000 (0:00:01.163) 0:38:43.121 ******
2026-02-03 06:33:55.835937 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:33:55.835943 | orchestrator |
2026-02-03 06:33:55.835949 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-03 06:33:55.835955 | orchestrator | Tuesday 03 February 2026 06:33:31 +0000 (0:00:01.185) 0:38:44.306 ******
2026-02-03 06:33:55.835961 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:33:55.835967 | orchestrator |
2026-02-03 06:33:55.835973 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-03 06:33:55.835979 | orchestrator | Tuesday 03 February 2026 06:33:32 +0000 (0:00:01.211) 0:38:45.518 ******
2026-02-03 06:33:55.835985 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:33:55.835992 | orchestrator |
2026-02-03 06:33:55.835998 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-03 06:33:55.836005 | orchestrator | Tuesday 03 February 2026 06:33:33 +0000 (0:00:01.223) 0:38:46.742 ******
2026-02-03 06:33:55.836011 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:33:55.836018 | orchestrator |
2026-02-03 06:33:55.836024 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-03 06:33:55.836031 | orchestrator | Tuesday 03 February 2026 06:33:34 +0000 (0:00:01.192) 0:38:47.934 ******
2026-02-03 06:33:55.836037 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:33:55.836043 | orchestrator |
2026-02-03 06:33:55.836051 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-03 06:33:55.836057 | orchestrator | Tuesday 03 February 2026 06:33:35 +0000 (0:00:01.227) 0:38:49.161 ******
2026-02-03 06:33:55.836063 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:33:55.836069 | orchestrator |
2026-02-03 06:33:55.836075 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-03 06:33:55.836083 | orchestrator | Tuesday 03 February 2026 06:33:37 +0000 (0:00:01.254) 0:38:50.416 ******
2026-02-03 06:33:55.836090 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:33:55.836096 | orchestrator |
2026-02-03 06:33:55.836102 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-03 06:33:55.836115 | orchestrator | Tuesday 03 February 2026 06:33:38 +0000 (0:00:01.230) 0:38:51.647 ******
2026-02-03 06:33:55.836121 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:33:55.836128 | orchestrator |
2026-02-03 06:33:55.836134 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-03 06:33:55.836140 | orchestrator | Tuesday 03 February 2026 06:33:39 +0000 (0:00:01.163) 0:38:52.811 ******
2026-02-03 06:33:55.836146 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:33:55.836152 | orchestrator |
2026-02-03 06:33:55.836157 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-03 06:33:55.836164 | orchestrator | Tuesday 03 February 2026 06:33:40 +0000 (0:00:01.204) 0:38:54.016 ******
2026-02-03 06:33:55.836169 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:33:55.836175 | orchestrator |
2026-02-03 06:33:55.836182 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-03 06:33:55.836187 | orchestrator | Tuesday 03 February 2026 06:33:42 +0000 (0:00:01.259) 0:38:55.276 ******
2026-02-03 06:33:55.836193 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:33:55.836199 | orchestrator |
2026-02-03 06:33:55.836205 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-03 06:33:55.836211 | orchestrator | Tuesday 03 February 2026 06:33:43 +0000 (0:00:01.184) 0:38:56.460 ******
2026-02-03 06:33:55.836217 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:33:55.836223 | orchestrator |
2026-02-03 06:33:55.836229 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-03 06:33:55.836235 | orchestrator | Tuesday 03 February 2026 06:33:45 +0000 (0:00:02.031) 0:38:58.492 ******
2026-02-03 06:33:55.836241 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:33:55.836247 | orchestrator |
2026-02-03 06:33:55.836254 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-03 06:33:55.836259 | orchestrator | Tuesday 03 February 2026 06:33:47 +0000 (0:00:02.324) 0:39:00.816 ******
2026-02-03 06:33:55.836266 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3
2026-02-03 06:33:55.836273 | orchestrator |
2026-02-03 06:33:55.836278 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-03 06:33:55.836284 | orchestrator | Tuesday 03 February 2026 06:33:48 +0000 (0:00:01.173) 0:39:01.990 ******
2026-02-03 06:33:55.836290 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:33:55.836297 | orchestrator |
2026-02-03 06:33:55.836303 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-03 06:33:55.836309 | orchestrator | Tuesday 03 February 2026 06:33:49 +0000 (0:00:01.166) 0:39:03.156 ******
2026-02-03 06:33:55.836315 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:33:55.836321 | orchestrator |
2026-02-03 06:33:55.836327 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-03 06:33:55.836333 | orchestrator | Tuesday 03 February 2026 06:33:51 +0000 (0:00:01.168) 0:39:04.325 ******
2026-02-03 06:33:55.836339 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-03 06:33:55.836345 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-03 06:33:55.836352 | orchestrator |
2026-02-03 06:33:55.836358 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-03 06:33:55.836364 | orchestrator | Tuesday 03 February 2026 06:33:53 +0000 (0:00:01.865) 0:39:06.190 ******
2026-02-03 06:33:55.836370 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:33:55.836376 | orchestrator |
2026-02-03 06:33:55.836382 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-03 06:33:55.836388 | orchestrator | Tuesday 03 February 2026 06:33:54 +0000 (0:00:01.516) 0:39:07.707 ******
2026-02-03 06:33:55.836393 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:33:55.836399 | orchestrator |
2026-02-03 06:33:55.836410 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-03 06:33:55.836423 | orchestrator | Tuesday 03 February 2026 06:33:55 +0000 (0:00:01.292) 0:39:09.000 ******
2026-02-03 06:34:45.564429 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:34:45.564547 | orchestrator |
2026-02-03 06:34:45.564564 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-03 06:34:45.564578 | orchestrator | Tuesday 03 February 2026 06:33:57 +0000 (0:00:01.196) 0:39:10.197 ******
2026-02-03 06:34:45.564589 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:34:45.564606 | orchestrator |
2026-02-03 06:34:45.564626 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-03 06:34:45.564655 | orchestrator | Tuesday 03 February 2026 06:33:58 +0000 (0:00:01.264) 0:39:11.462 ******
2026-02-03 06:34:45.564676 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3
2026-02-03 06:34:45.564696 | orchestrator |
2026-02-03 06:34:45.564715 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-03 06:34:45.564733 | orchestrator | Tuesday 03 February 2026 06:33:59 +0000 (0:00:01.170) 0:39:12.632 ******
2026-02-03 06:34:45.564752 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:34:45.564772 | orchestrator |
2026-02-03 06:34:45.564848 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-03 06:34:45.564871 | orchestrator | Tuesday 03 February 2026 06:34:01 +0000 (0:00:01.806) 0:39:14.439 ******
2026-02-03 06:34:45.564891 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-03 06:34:45.564910 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-03 06:34:45.564929 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-03 06:34:45.564948 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:34:45.564966 | orchestrator |
2026-02-03 06:34:45.564984 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-03 06:34:45.565001 | orchestrator | Tuesday 03 February 2026 06:34:02 +0000 (0:00:01.321) 0:39:15.761 ******
2026-02-03 06:34:45.565020 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:34:45.565039 | orchestrator |
2026-02-03 06:34:45.565058 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-03 06:34:45.565078 | orchestrator | Tuesday 03 February 2026 06:34:03 +0000 (0:00:01.176) 0:39:16.937 ******
2026-02-03 06:34:45.565092 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:34:45.565103 | orchestrator |
2026-02-03 06:34:45.565114 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-03 06:34:45.565125 | orchestrator | Tuesday 03 February 2026 06:34:04 +0000 (0:00:01.216) 0:39:18.154 ******
2026-02-03 06:34:45.565136 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:34:45.565147 | orchestrator |
2026-02-03 06:34:45.565158 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-03 06:34:45.565169 | orchestrator | Tuesday 03 February 2026 06:34:06 +0000 (0:00:01.226) 0:39:19.380 ******
2026-02-03 06:34:45.565179 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:34:45.565190 | orchestrator |
2026-02-03 06:34:45.565201 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-03 06:34:45.565211 | orchestrator | Tuesday 03 February 2026 06:34:07 +0000 (0:00:01.176) 0:39:20.557 ******
2026-02-03 06:34:45.565222 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:34:45.565233 | orchestrator |
2026-02-03 06:34:45.565243 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-03 06:34:45.565254 | orchestrator | Tuesday 03 February 2026 06:34:08 +0000 (0:00:01.209) 0:39:21.766 ******
2026-02-03 06:34:45.565265 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:34:45.565275 | orchestrator |
2026-02-03 06:34:45.565286 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-03 06:34:45.565297 | orchestrator | Tuesday 03 February 2026 06:34:11 +0000 (0:00:02.763) 0:39:24.530 ******
2026-02-03 06:34:45.565307 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:34:45.565318 | orchestrator |
2026-02-03 06:34:45.565328 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-03 06:34:45.565367 | orchestrator | Tuesday 03 February 2026 06:34:12 +0000 (0:00:01.257) 0:39:25.788 ******
2026-02-03 06:34:45.565379 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3
2026-02-03 06:34:45.565390 | orchestrator |
2026-02-03 06:34:45.565401 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-03 06:34:45.565412 | orchestrator | Tuesday 03 February 2026 06:34:13 +0000 (0:00:01.351) 0:39:27.139 ******
2026-02-03 06:34:45.565423 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:34:45.565433 | orchestrator |
2026-02-03 06:34:45.565444 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-03 06:34:45.565455 | orchestrator | Tuesday 03 February 2026 06:34:15 +0000 (0:00:01.214) 0:39:28.354 ******
2026-02-03 06:34:45.565466 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:34:45.565476 | orchestrator |
2026-02-03 06:34:45.565487 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-03 06:34:45.565498 | orchestrator | Tuesday 03 February 2026 06:34:16 +0000 (0:00:01.198) 0:39:29.552 ******
2026-02-03 06:34:45.565508 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:34:45.565519 | orchestrator |
2026-02-03 06:34:45.565530 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-03 06:34:45.565541 | orchestrator | Tuesday 03 February 2026 06:34:17 +0000 (0:00:01.176) 0:39:30.729 ******
2026-02-03 06:34:45.565551 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:34:45.565562 | orchestrator |
2026-02-03 06:34:45.565573 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-03 06:34:45.565583 | orchestrator | Tuesday 03 February 2026 06:34:18 +0000 (0:00:01.203) 0:39:31.932 ******
2026-02-03 06:34:45.565594 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:34:45.565605 | orchestrator |
2026-02-03 06:34:45.565615 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-03 06:34:45.565626 | orchestrator | Tuesday 03 February 2026 06:34:19 +0000 (0:00:01.243) 0:39:33.176 ******
2026-02-03 06:34:45.565652 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:34:45.565663 | orchestrator |
2026-02-03 06:34:45.565694 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-03 06:34:45.565706 | orchestrator | Tuesday 03 February 2026 06:34:21 +0000 (0:00:01.280) 0:39:34.456 ******
2026-02-03 06:34:45.565717 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:34:45.565728 | orchestrator |
2026-02-03 06:34:45.565739 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-03 06:34:45.565749 | orchestrator | Tuesday 03 February 2026 06:34:22 +0000 (0:00:01.213) 0:39:35.670 ******
2026-02-03 06:34:45.565760 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:34:45.565771 | orchestrator |
2026-02-03 06:34:45.565782 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-03 06:34:45.565823 | orchestrator | Tuesday 03 February 2026 06:34:23 +0000 (0:00:01.329) 0:39:37.000 ******
2026-02-03 06:34:45.565836 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:34:45.565847 | orchestrator |
2026-02-03 06:34:45.565857 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-03 06:34:45.565868 | orchestrator | Tuesday 03 February 2026 06:34:25 +0000 (0:00:01.249) 0:39:38.249 ******
2026-02-03 06:34:45.565879 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3
2026-02-03 06:34:45.565891 | orchestrator |
2026-02-03 06:34:45.565909 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-03 06:34:45.565928 | orchestrator | Tuesday 03 February 2026 06:34:26 +0000 (0:00:01.198) 0:39:39.448 ******
2026-02-03 06:34:45.565947 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph)
2026-02-03 06:34:45.565965 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/)
2026-02-03 06:34:45.565983 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-02-03 06:34:45.566000 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-02-03 06:34:45.566087 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-02-03 06:34:45.566117 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-02-03 06:34:45.566128 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-02-03 06:34:45.566138 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-02-03 06:34:45.566149 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-03 06:34:45.566160 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-03 06:34:45.566171 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-03 06:34:45.566182 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-03 06:34:45.566193 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-03 06:34:45.566203 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-03 06:34:45.566214 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2026-02-03 06:34:45.566225 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph)
2026-02-03 06:34:45.566236 | orchestrator |
2026-02-03 06:34:45.566247 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-03 06:34:45.566258 | orchestrator | Tuesday 03 February 2026 06:34:32 +0000 (0:00:06.738) 0:39:46.186 ******
2026-02-03 06:34:45.566269 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3
2026-02-03 06:34:45.566279 | orchestrator |
2026-02-03 06:34:45.566290 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-03 06:34:45.566301 | orchestrator | Tuesday 03 February 2026 06:34:34 +0000 (0:00:01.736) 0:39:47.923 ******
2026-02-03 06:34:45.566312 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-03 06:34:45.566324 | orchestrator |
2026-02-03 06:34:45.566335 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-03 06:34:45.566345 | orchestrator | Tuesday 03 February 2026 06:34:36 +0000 (0:00:01.534) 0:39:49.458 ******
2026-02-03 06:34:45.566358 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-03 06:34:45.566375 | orchestrator |
2026-02-03 06:34:45.566392 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-03 06:34:45.566410 | orchestrator | Tuesday 03 February 2026 06:34:38 +0000 (0:00:02.110) 0:39:51.569 ******
2026-02-03 06:34:45.566427 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:34:45.566446 | orchestrator |
2026-02-03 06:34:45.566464 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-03 06:34:45.566477 | orchestrator | Tuesday 03 February 2026 06:34:39 +0000 (0:00:01.220) 0:39:52.789 ******
2026-02-03 06:34:45.566488 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:34:45.566499 | orchestrator |
2026-02-03 06:34:45.566509 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-03 06:34:45.566520 | orchestrator | Tuesday 03 February 2026 06:34:40 +0000 (0:00:01.171) 0:39:53.961 ******
2026-02-03 06:34:45.566530 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:34:45.566541 | orchestrator |
2026-02-03 06:34:45.566552 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-03 06:34:45.566562 | orchestrator | Tuesday 03 February 2026 06:34:41 +0000 (0:00:01.168) 0:39:55.130 ******
2026-02-03 06:34:45.566573 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:34:45.566584 | orchestrator |
2026-02-03 06:34:45.566594 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-03 06:34:45.566605 | orchestrator | Tuesday 03 February 2026 06:34:43 +0000 (0:00:01.168) 0:39:56.298 ******
2026-02-03 06:34:45.566615 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:34:45.566626 | orchestrator |
2026-02-03 06:34:45.566637 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-03 06:34:45.566656 | orchestrator | Tuesday 03 February 2026 06:34:44 +0000 (0:00:01.188) 0:39:57.487 ******
2026-02-03 06:34:45.566674 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:34:45.566685 | orchestrator |
2026-02-03 06:34:45.566709 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-03 06:35:39.588057 | orchestrator | Tuesday 03 February 2026 06:34:45 +0000 (0:00:01.250) 0:39:58.738 ******
2026-02-03 06:35:39.588157 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:35:39.588171 | orchestrator |
2026-02-03 06:35:39.588181 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-03 06:35:39.588190 | orchestrator | Tuesday 03 February 2026 06:34:46 +0000 (0:00:01.234) 0:39:59.972 ******
2026-02-03 06:35:39.588199 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:35:39.588207 | orchestrator |
2026-02-03 06:35:39.588215 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-03 06:35:39.588224 | orchestrator | Tuesday 03 February 2026 06:34:47 +0000 (0:00:01.165) 0:40:01.138 ******
2026-02-03 06:35:39.588232 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:35:39.588240 | orchestrator |
2026-02-03 06:35:39.588248 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-03 06:35:39.588256 | orchestrator | Tuesday 03 February 2026 06:34:49 +0000 (0:00:01.180) 0:40:02.319 ******
2026-02-03 06:35:39.588264 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:35:39.588272 | orchestrator |
2026-02-03 06:35:39.588280 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-03 06:35:39.588288 | orchestrator | Tuesday 03 February 2026 06:34:50 +0000 (0:00:01.299) 0:40:03.619 ******
2026-02-03 06:35:39.588296 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:35:39.588306 | orchestrator |
2026-02-03 06:35:39.588314 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-03 06:35:39.588322 | orchestrator | Tuesday 03 February 2026 06:34:51 +0000 (0:00:01.304) 0:40:04.924 ******
2026-02-03 06:35:39.588330 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-02-03 06:35:39.588338 | orchestrator |
2026-02-03 06:35:39.588346 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-03 06:35:39.588354 | orchestrator | Tuesday 03 February 2026 06:34:56 +0000 (0:00:04.744) 0:40:09.669 ******
2026-02-03 06:35:39.588362 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-03 06:35:39.588371 | orchestrator |
2026-02-03 06:35:39.588379 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-03 06:35:39.588387 | orchestrator | Tuesday 03 February 2026 06:34:57 +0000 (0:00:01.222) 0:40:10.891 ******
2026-02-03 06:35:39.588397 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-02-03 06:35:39.588408 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-02-03 06:35:39.588418 | orchestrator |
2026-02-03 06:35:39.588426 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-03 06:35:39.588434 | orchestrator | Tuesday 03 February 2026 06:35:05 +0000 (0:00:08.050) 0:40:18.941 ******
2026-02-03 06:35:39.588442 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:35:39.588450 | orchestrator |
2026-02-03 06:35:39.588458 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-03 06:35:39.588466 | orchestrator | Tuesday 03 February 2026 06:35:07 +0000 (0:00:01.246) 0:40:20.188 ******
2026-02-03 06:35:39.588494 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:35:39.588503 | orchestrator |
2026-02-03 06:35:39.588511 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-03 06:35:39.588519 | orchestrator | Tuesday 03 February 2026 06:35:08 +0000 (0:00:01.170) 0:40:21.358 ******
2026-02-03 06:35:39.588527 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:35:39.588535 | orchestrator |
2026-02-03 06:35:39.588543 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-03 06:35:39.588551 | orchestrator | Tuesday 03 February 2026 06:35:09 +0000 (0:00:01.228) 0:40:22.587 ******
2026-02-03 06:35:39.588560 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:35:39.588568 | orchestrator |
2026-02-03 06:35:39.588576 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-03 06:35:39.588584 | orchestrator | Tuesday 03 February 2026 06:35:10 +0000 (0:00:01.245) 0:40:23.832 ******
2026-02-03 06:35:39.588592 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:35:39.588601 | orchestrator |
2026-02-03 06:35:39.588610 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-03 06:35:39.588620 | orchestrator | Tuesday 03 February 2026 06:35:11 +0000 (0:00:01.247) 0:40:25.080 ******
2026-02-03 06:35:39.588629 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:35:39.588638 | orchestrator |
2026-02-03 06:35:39.588648 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-03 06:35:39.588657 | orchestrator | Tuesday 03 February 2026 06:35:13 +0000 (0:00:01.328) 0:40:26.408 ******
2026-02-03 06:35:39.588667 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-03 06:35:39.588676 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-03 06:35:39.588699 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-03 06:35:39.588709 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:35:39.588718 | orchestrator |
2026-02-03 06:35:39.588727 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-03 06:35:39.588750 | orchestrator | Tuesday 03 February 2026 06:35:15 +0000 (0:00:01.918) 0:40:28.326 ******
2026-02-03 06:35:39.588760 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-03 06:35:39.588769 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-03 06:35:39.588778 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-03 06:35:39.588810 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:35:39.588820 | orchestrator |
2026-02-03 06:35:39.588829 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-03 06:35:39.588838 | orchestrator | Tuesday 03 February 2026 06:35:17 +0000 (0:00:01.952) 0:40:30.279 ******
2026-02-03 06:35:39.588847 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-03 06:35:39.588856 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-03 06:35:39.588865 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-03 06:35:39.588874 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:35:39.588883 | orchestrator |
2026-02-03 06:35:39.588891 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-03 06:35:39.588900 | orchestrator | Tuesday 03 February 2026 06:35:19 +0000 (0:00:01.950) 0:40:32.230 ******
2026-02-03 06:35:39.588909 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:35:39.588918 | orchestrator |
2026-02-03 06:35:39.588927 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-03 06:35:39.588936 | orchestrator | Tuesday 03 February 2026 06:35:20 +0000 (0:00:01.298) 0:40:33.528 ******
2026-02-03 06:35:39.588945 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-03 06:35:39.588954 | orchestrator |
2026-02-03 06:35:39.588964 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-03 06:35:39.588973 | orchestrator | Tuesday 03 February 2026 06:35:21 +0000 (0:00:01.431) 0:40:34.960 ******
2026-02-03 06:35:39.588990 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:35:39.588998 | orchestrator |
2026-02-03 06:35:39.589006 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-02-03 06:35:39.589014 | orchestrator | Tuesday 03 February 2026 06:35:23 +0000 (0:00:01.827) 0:40:36.787 ******
2026-02-03 06:35:39.589022 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:35:39.589030 | orchestrator |
2026-02-03 06:35:39.589038 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-02-03 06:35:39.589046 | orchestrator | Tuesday 03 February 2026 06:35:24 +0000 (0:00:01.270) 0:40:38.058 ******
2026-02-03 06:35:39.589054 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-03 06:35:39.589062 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-03 06:35:39.589071 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-03 06:35:39.589079 | orchestrator |
2026-02-03 06:35:39.589086 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-02-03 06:35:39.589094 | orchestrator | Tuesday 03 February 2026 06:35:26 +0000 (0:00:01.871) 0:40:39.929 ******
2026-02-03 06:35:39.589102 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3
2026-02-03 06:35:39.589110 | orchestrator |
2026-02-03 06:35:39.589118 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-02-03 06:35:39.589126 | orchestrator | Tuesday 03 February 2026 06:35:28 +0000 (0:00:01.545) 0:40:41.475 ******
2026-02-03 06:35:39.589134 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:35:39.589142 | orchestrator |
2026-02-03 06:35:39.589150 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-02-03 06:35:39.589158 | orchestrator | Tuesday 03 February 2026 06:35:29 +0000 (0:00:01.207) 0:40:42.683 ******
2026-02-03 06:35:39.589166 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:35:39.589174 | orchestrator |
2026-02-03 06:35:39.589182 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-02-03 06:35:39.589190 | orchestrator | Tuesday 03 February 2026 06:35:30 +0000 (0:00:01.222) 0:40:43.906 ******
2026-02-03 06:35:39.589198 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:35:39.589205 | orchestrator |
2026-02-03 06:35:39.589213 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-02-03 06:35:39.589221 | orchestrator | Tuesday 03 February 2026 06:35:32 +0000 (0:00:01.537) 0:40:45.443 ******
2026-02-03 06:35:39.589229 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:35:39.589237 | orchestrator |
2026-02-03 06:35:39.589245 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-02-03 06:35:39.589253 | orchestrator | Tuesday 03 February 2026 06:35:33 +0000 (0:00:01.435) 0:40:46.879 ******
2026-02-03 06:35:39.589261 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-03 06:35:39.589269 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-03 06:35:39.589277 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-03 06:35:39.589285 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-03 06:35:39.589293 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-03 06:35:39.589301 | orchestrator |
2026-02-03 06:35:39.589309 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-02-03 06:35:39.589317 | orchestrator | Tuesday 03 February 2026 06:35:36 +0000 (0:00:03.147) 0:40:50.027 ******
2026-02-03 06:35:39.589325 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:35:39.589333 | orchestrator |
2026-02-03 06:35:39.589341 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-02-03 06:35:39.589355 | orchestrator | Tuesday 03 February 2026 06:35:38 +0000 (0:00:01.172) 0:40:51.200 ******
2026-02-03 06:35:39.589364 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3
2026-02-03 06:35:39.589376 | orchestrator |
2026-02-03 06:35:39.589385 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-02-03 06:36:48.320364 | orchestrator | Tuesday 03 February 2026 06:35:39 +0000 (0:00:01.561) 0:40:52.762 ******
2026-02-03 06:36:48.320476 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-03 06:36:48.320492 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-02-03 06:36:48.320504 | orchestrator |
2026-02-03 06:36:48.320515 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-02-03 06:36:48.320525 | orchestrator | Tuesday 03 February 2026 06:35:41 +0000 (0:00:01.907) 0:40:54.670 ******
2026-02-03 06:36:48.320535 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-03 06:36:48.320545 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-03 06:36:48.320555 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-03 06:36:48.320565 | orchestrator |
2026-02-03 06:36:48.320575 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-02-03 06:36:48.320585 | orchestrator | Tuesday 03 February 2026 06:35:44 +0000 (0:00:03.357) 0:40:58.027 ******
2026-02-03 06:36:48.320595 | orchestrator | ok: [testbed-node-3] => (item=None)
2026-02-03 06:36:48.320605 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-03 06:36:48.320615 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:36:48.320625 | orchestrator | 2026-02-03 06:36:48.320635 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-03 06:36:48.320644 | orchestrator | Tuesday 03 February 2026 06:35:46 +0000 (0:00:02.138) 0:41:00.166 ****** 2026-02-03 06:36:48.320654 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:36:48.320664 | orchestrator | 2026-02-03 06:36:48.320673 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-03 06:36:48.320683 | orchestrator | Tuesday 03 February 2026 06:35:48 +0000 (0:00:01.295) 0:41:01.461 ****** 2026-02-03 06:36:48.320692 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:36:48.320702 | orchestrator | 2026-02-03 06:36:48.320712 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-03 06:36:48.320721 | orchestrator | Tuesday 03 February 2026 06:35:49 +0000 (0:00:01.186) 0:41:02.647 ****** 2026-02-03 06:36:48.320731 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:36:48.320741 | orchestrator | 2026-02-03 06:36:48.320750 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-03 06:36:48.320760 | orchestrator | Tuesday 03 February 2026 06:35:50 +0000 (0:00:01.232) 0:41:03.880 ****** 2026-02-03 06:36:48.320769 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3 2026-02-03 06:36:48.320822 | orchestrator | 2026-02-03 06:36:48.320833 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-03 06:36:48.320843 | orchestrator | Tuesday 03 February 2026 06:35:52 +0000 (0:00:01.549) 0:41:05.429 ****** 2026-02-03 06:36:48.320853 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:36:48.320863 | orchestrator | 2026-02-03 06:36:48.320873 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 
2026-02-03 06:36:48.320885 | orchestrator | Tuesday 03 February 2026 06:35:53 +0000 (0:00:01.552) 0:41:06.982 ****** 2026-02-03 06:36:48.320897 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:36:48.320909 | orchestrator | 2026-02-03 06:36:48.320920 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-03 06:36:48.320932 | orchestrator | Tuesday 03 February 2026 06:35:57 +0000 (0:00:03.783) 0:41:10.765 ****** 2026-02-03 06:36:48.320943 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3 2026-02-03 06:36:48.320954 | orchestrator | 2026-02-03 06:36:48.320966 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-02-03 06:36:48.320977 | orchestrator | Tuesday 03 February 2026 06:35:59 +0000 (0:00:01.526) 0:41:12.292 ****** 2026-02-03 06:36:48.320988 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:36:48.321021 | orchestrator | 2026-02-03 06:36:48.321031 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-03 06:36:48.321041 | orchestrator | Tuesday 03 February 2026 06:36:01 +0000 (0:00:02.101) 0:41:14.393 ****** 2026-02-03 06:36:48.321051 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:36:48.321060 | orchestrator | 2026-02-03 06:36:48.321070 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-03 06:36:48.321080 | orchestrator | Tuesday 03 February 2026 06:36:03 +0000 (0:00:02.054) 0:41:16.447 ****** 2026-02-03 06:36:48.321089 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:36:48.321099 | orchestrator | 2026-02-03 06:36:48.321109 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-03 06:36:48.321118 | orchestrator | Tuesday 03 February 2026 06:36:05 +0000 (0:00:02.383) 0:41:18.831 ****** 2026-02-03 06:36:48.321128 | orchestrator | skipping: [testbed-node-3] 
2026-02-03 06:36:48.321138 | orchestrator | 2026-02-03 06:36:48.321147 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-02-03 06:36:48.321157 | orchestrator | Tuesday 03 February 2026 06:36:06 +0000 (0:00:01.183) 0:41:20.014 ****** 2026-02-03 06:36:48.321167 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:36:48.321176 | orchestrator | 2026-02-03 06:36:48.321186 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-03 06:36:48.321195 | orchestrator | Tuesday 03 February 2026 06:36:08 +0000 (0:00:01.252) 0:41:21.266 ****** 2026-02-03 06:36:48.321205 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-03 06:36:48.321215 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-02-03 06:36:48.321225 | orchestrator | 2026-02-03 06:36:48.321234 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-03 06:36:48.321244 | orchestrator | Tuesday 03 February 2026 06:36:09 +0000 (0:00:01.918) 0:41:23.185 ****** 2026-02-03 06:36:48.321253 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-03 06:36:48.321263 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-02-03 06:36:48.321273 | orchestrator | 2026-02-03 06:36:48.321296 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-02-03 06:36:48.321307 | orchestrator | Tuesday 03 February 2026 06:36:13 +0000 (0:00:03.026) 0:41:26.211 ****** 2026-02-03 06:36:48.321316 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-03 06:36:48.321345 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-02-03 06:36:48.321356 | orchestrator | 2026-02-03 06:36:48.321366 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-03 06:36:48.321376 | orchestrator | Tuesday 03 February 2026 06:36:17 +0000 (0:00:04.710) 0:41:30.921 ****** 2026-02-03 06:36:48.321386 | orchestrator 
| skipping: [testbed-node-3] 2026-02-03 06:36:48.321396 | orchestrator | 2026-02-03 06:36:48.321406 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-03 06:36:48.321415 | orchestrator | Tuesday 03 February 2026 06:36:19 +0000 (0:00:01.308) 0:41:32.230 ****** 2026-02-03 06:36:48.321425 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:36:48.321435 | orchestrator | 2026-02-03 06:36:48.321444 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-03 06:36:48.321459 | orchestrator | Tuesday 03 February 2026 06:36:20 +0000 (0:00:01.363) 0:41:33.593 ****** 2026-02-03 06:36:48.321475 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:36:48.321492 | orchestrator | 2026-02-03 06:36:48.321508 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-02-03 06:36:48.321524 | orchestrator | Tuesday 03 February 2026 06:36:22 +0000 (0:00:01.922) 0:41:35.516 ****** 2026-02-03 06:36:48.321543 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:36:48.321560 | orchestrator | 2026-02-03 06:36:48.321574 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-02-03 06:36:48.321583 | orchestrator | Tuesday 03 February 2026 06:36:23 +0000 (0:00:01.190) 0:41:36.707 ****** 2026-02-03 06:36:48.321593 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:36:48.321603 | orchestrator | 2026-02-03 06:36:48.321612 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-02-03 06:36:48.321632 | orchestrator | Tuesday 03 February 2026 06:36:24 +0000 (0:00:01.239) 0:41:37.947 ****** 2026-02-03 06:36:48.321641 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-02-03 06:36:48.321652 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-02-03 06:36:48.321662 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-03 06:36:48.321672 | orchestrator | 2026-02-03 06:36:48.321681 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-02-03 06:36:48.321691 | orchestrator | 2026-02-03 06:36:48.321700 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-03 06:36:48.321710 | orchestrator | Tuesday 03 February 2026 06:36:33 +0000 (0:00:08.457) 0:41:46.405 ****** 2026-02-03 06:36:48.321802 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-02-03 06:36:48.321822 | orchestrator | 2026-02-03 06:36:48.321839 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-03 06:36:48.321857 | orchestrator | Tuesday 03 February 2026 06:36:34 +0000 (0:00:01.207) 0:41:47.612 ****** 2026-02-03 06:36:48.321876 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:36:48.321888 | orchestrator | 2026-02-03 06:36:48.321897 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-03 06:36:48.321907 | orchestrator | Tuesday 03 February 2026 06:36:36 +0000 (0:00:01.595) 0:41:49.208 ****** 2026-02-03 06:36:48.321916 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:36:48.321926 | orchestrator | 2026-02-03 06:36:48.321935 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-03 06:36:48.321944 | orchestrator | Tuesday 03 February 2026 06:36:37 +0000 (0:00:01.215) 0:41:50.424 ****** 2026-02-03 06:36:48.321954 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:36:48.321963 | orchestrator | 2026-02-03 06:36:48.321973 | orchestrator | TASK [ceph-facts : Set_fact container_binary] 
********************************** 2026-02-03 06:36:48.321982 | orchestrator | Tuesday 03 February 2026 06:36:38 +0000 (0:00:01.576) 0:41:52.001 ****** 2026-02-03 06:36:48.321992 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:36:48.322001 | orchestrator | 2026-02-03 06:36:48.322011 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-03 06:36:48.322080 | orchestrator | Tuesday 03 February 2026 06:36:40 +0000 (0:00:01.196) 0:41:53.197 ****** 2026-02-03 06:36:48.322091 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:36:48.322100 | orchestrator | 2026-02-03 06:36:48.322110 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-03 06:36:48.322120 | orchestrator | Tuesday 03 February 2026 06:36:41 +0000 (0:00:01.262) 0:41:54.460 ****** 2026-02-03 06:36:48.322130 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:36:48.322139 | orchestrator | 2026-02-03 06:36:48.322149 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-03 06:36:48.322159 | orchestrator | Tuesday 03 February 2026 06:36:42 +0000 (0:00:01.347) 0:41:55.807 ****** 2026-02-03 06:36:48.322169 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:36:48.322178 | orchestrator | 2026-02-03 06:36:48.322188 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-03 06:36:48.322198 | orchestrator | Tuesday 03 February 2026 06:36:43 +0000 (0:00:01.213) 0:41:57.021 ****** 2026-02-03 06:36:48.322207 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:36:48.322217 | orchestrator | 2026-02-03 06:36:48.322227 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-03 06:36:48.322236 | orchestrator | Tuesday 03 February 2026 06:36:45 +0000 (0:00:01.253) 0:41:58.275 ****** 2026-02-03 06:36:48.322246 | orchestrator | ok: [testbed-node-4 -> 
testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:36:48.322256 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:36:48.322265 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:36:48.322283 | orchestrator | 2026-02-03 06:36:48.322297 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-03 06:36:48.322323 | orchestrator | Tuesday 03 February 2026 06:36:46 +0000 (0:00:01.897) 0:42:00.172 ****** 2026-02-03 06:36:48.322340 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:36:48.322355 | orchestrator | 2026-02-03 06:36:48.322371 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-03 06:36:48.322402 | orchestrator | Tuesday 03 February 2026 06:36:48 +0000 (0:00:01.324) 0:42:01.496 ****** 2026-02-03 06:37:16.065328 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:37:16.065439 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:37:16.065454 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:37:16.065466 | orchestrator | 2026-02-03 06:37:16.065479 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-03 06:37:16.065491 | orchestrator | Tuesday 03 February 2026 06:36:51 +0000 (0:00:03.144) 0:42:04.641 ****** 2026-02-03 06:37:16.065504 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-03 06:37:16.065516 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-03 06:37:16.065527 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-03 06:37:16.065538 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:37:16.065549 | orchestrator | 
2026-02-03 06:37:16.065561 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-03 06:37:16.065572 | orchestrator | Tuesday 03 February 2026 06:36:52 +0000 (0:00:01.523) 0:42:06.164 ****** 2026-02-03 06:37:16.065584 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-03 06:37:16.065598 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-03 06:37:16.065610 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-03 06:37:16.065621 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:37:16.065632 | orchestrator | 2026-02-03 06:37:16.065643 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-03 06:37:16.065653 | orchestrator | Tuesday 03 February 2026 06:36:54 +0000 (0:00:01.811) 0:42:07.976 ****** 2026-02-03 06:37:16.065667 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 06:37:16.065682 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 06:37:16.065693 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 06:37:16.065729 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:37:16.065741 | orchestrator | 2026-02-03 06:37:16.065752 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-03 06:37:16.065764 | orchestrator | Tuesday 03 February 2026 06:36:56 +0000 (0:00:01.266) 0:42:09.242 ****** 2026-02-03 06:37:16.065830 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'fc9af7e241e8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-03 06:36:48.872657', 'end': '2026-02-03 06:36:48.915891', 'delta': '0:00:00.043234', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fc9af7e241e8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-03 06:37:16.065865 | orchestrator | ok: 
[testbed-node-4] => (item={'changed': False, 'stdout': 'a8f198eef309', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-03 06:36:49.444944', 'end': '2026-02-03 06:36:49.483761', 'delta': '0:00:00.038817', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a8f198eef309'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-03 06:37:16.065881 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '79d18794d8bb', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-03 06:36:50.009935', 'end': '2026-02-03 06:36:50.051463', 'delta': '0:00:00.041528', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['79d18794d8bb'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-03 06:37:16.065894 | orchestrator | 2026-02-03 06:37:16.065908 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-03 06:37:16.065921 | orchestrator | Tuesday 03 February 2026 06:36:57 +0000 (0:00:01.313) 0:42:10.555 ****** 2026-02-03 06:37:16.065935 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:37:16.065948 | orchestrator | 2026-02-03 06:37:16.065961 | orchestrator | TASK [ceph-facts : Get 
current fsid if cluster is already running] ************* 2026-02-03 06:37:16.065974 | orchestrator | Tuesday 03 February 2026 06:36:58 +0000 (0:00:01.329) 0:42:11.885 ****** 2026-02-03 06:37:16.065987 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:37:16.065999 | orchestrator | 2026-02-03 06:37:16.066013 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-03 06:37:16.066088 | orchestrator | Tuesday 03 February 2026 06:37:00 +0000 (0:00:01.312) 0:42:13.197 ****** 2026-02-03 06:37:16.066101 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:37:16.066114 | orchestrator | 2026-02-03 06:37:16.066127 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-03 06:37:16.066140 | orchestrator | Tuesday 03 February 2026 06:37:01 +0000 (0:00:01.628) 0:42:14.826 ****** 2026-02-03 06:37:16.066165 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-03 06:37:16.066178 | orchestrator | 2026-02-03 06:37:16.066190 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-03 06:37:16.066204 | orchestrator | Tuesday 03 February 2026 06:37:04 +0000 (0:00:03.023) 0:42:17.850 ****** 2026-02-03 06:37:16.066217 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:37:16.066231 | orchestrator | 2026-02-03 06:37:16.066242 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-03 06:37:16.066253 | orchestrator | Tuesday 03 February 2026 06:37:06 +0000 (0:00:01.647) 0:42:19.498 ****** 2026-02-03 06:37:16.066264 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:37:16.066275 | orchestrator | 2026-02-03 06:37:16.066286 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-03 06:37:16.066305 | orchestrator | Tuesday 03 February 2026 06:37:07 +0000 (0:00:01.246) 0:42:20.745 ****** 2026-02-03 
06:37:16.066324 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:37:16.066345 | orchestrator | 2026-02-03 06:37:16.066377 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-03 06:37:16.066395 | orchestrator | Tuesday 03 February 2026 06:37:08 +0000 (0:00:01.339) 0:42:22.084 ****** 2026-02-03 06:37:16.066414 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:37:16.066433 | orchestrator | 2026-02-03 06:37:16.066450 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-03 06:37:16.066468 | orchestrator | Tuesday 03 February 2026 06:37:10 +0000 (0:00:01.179) 0:42:23.263 ****** 2026-02-03 06:37:16.066485 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:37:16.066504 | orchestrator | 2026-02-03 06:37:16.066524 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-03 06:37:16.066543 | orchestrator | Tuesday 03 February 2026 06:37:11 +0000 (0:00:01.183) 0:42:24.447 ****** 2026-02-03 06:37:16.066563 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:37:16.066584 | orchestrator | 2026-02-03 06:37:16.066604 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-03 06:37:16.066620 | orchestrator | Tuesday 03 February 2026 06:37:12 +0000 (0:00:01.208) 0:42:25.656 ****** 2026-02-03 06:37:16.066631 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:37:16.066642 | orchestrator | 2026-02-03 06:37:16.066653 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-03 06:37:16.066664 | orchestrator | Tuesday 03 February 2026 06:37:13 +0000 (0:00:01.160) 0:42:26.816 ****** 2026-02-03 06:37:16.066674 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:37:16.066685 | orchestrator | 2026-02-03 06:37:16.066705 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] 
*********************** 2026-02-03 06:37:16.066716 | orchestrator | Tuesday 03 February 2026 06:37:14 +0000 (0:00:01.217) 0:42:28.034 ****** 2026-02-03 06:37:16.066727 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:37:16.066738 | orchestrator | 2026-02-03 06:37:16.066759 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-03 06:37:17.568925 | orchestrator | Tuesday 03 February 2026 06:37:16 +0000 (0:00:01.204) 0:42:29.239 ****** 2026-02-03 06:37:17.569053 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:37:17.569080 | orchestrator | 2026-02-03 06:37:17.569101 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-03 06:37:17.569122 | orchestrator | Tuesday 03 February 2026 06:37:17 +0000 (0:00:01.232) 0:42:30.471 ****** 2026-02-03 06:37:17.569145 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:37:17.569175 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1a37b12a--042e--589b--8d7d--13944ef33291-osd--block--1a37b12a--042e--589b--8d7d--13944ef33291', 'dm-uuid-LVM-F6tlR8rX28mHBuGZmIB9CPxCef1PwVO1F69HDz3pfwyuxUfx8QlY6u3q4wNOYZvt'], 'uuids': ['ee84a40a-c8f5-4363-8b92-865eb14b3049'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f58f055b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 
'host': '', 'holders': ['F69HDz-3pfw-yuxU-fx8Q-lY6u-3q4w-NOYZvt']}})  2026-02-03 06:37:17.569230 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15b94581-7087-40af-83f2-cd9970e768be', 'scsi-SQEMU_QEMU_HARDDISK_15b94581-7087-40af-83f2-cd9970e768be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '15b94581', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-03 06:37:17.569254 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-OIAfSx-9FrO-G71T-2YtW-9cXZ-u9sv-iVlruI', 'scsi-0QEMU_QEMU_HARDDISK_6b074c22-654d-40e5-9251-7e10d9fad41a', 'scsi-SQEMU_QEMU_HARDDISK_6b074c22-654d-40e5-9251-7e10d9fad41a'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6b074c22', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--121565c5--01e5--5794--959e--075d91e35362-osd--block--121565c5--01e5--5794--959e--075d91e35362']}})  2026-02-03 06:37:17.569277 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:37:17.569298 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:37:17.569362 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-03 06:37:17.569407 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:37:17.569440 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0SrgRP-hb81-g1cZ-8CRq-deoz-Hyru-PhRzun', 'dm-uuid-CRYPT-LUKS2-1805b057808e47489bd25959cb85c8e5-0SrgRP-hb81-g1cZ-8CRq-deoz-Hyru-PhRzun'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-03 06:37:17.569462 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:37:17.569484 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--121565c5--01e5--5794--959e--075d91e35362-osd--block--121565c5--01e5--5794--959e--075d91e35362', 'dm-uuid-LVM-JxrjzObQ9uufb9OS44FMciQneXibANhw0SrgRPhb81g1cZ8CRqdeozHyruPhRzun'], 'uuids': ['1805b057-808e-4748-9bd2-5959cb85c8e5'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6b074c22', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0SrgRP-hb81-g1cZ-8CRq-deoz-Hyru-PhRzun']}})  2026-02-03 06:37:17.569507 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-QlIL1O-6aa2-xc1n-eTaR-0yU7-qpeR-rfKE1n', 'scsi-0QEMU_QEMU_HARDDISK_f58f055b-eadc-4fe1-a72e-2d1917f1f2dd', 'scsi-SQEMU_QEMU_HARDDISK_f58f055b-eadc-4fe1-a72e-2d1917f1f2dd'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f58f055b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1a37b12a--042e--589b--8d7d--13944ef33291-osd--block--1a37b12a--042e--589b--8d7d--13944ef33291']}})  2026-02-03 06:37:17.569527 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:37:17.569579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9ac79520', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-03 06:37:19.075760 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:37:19.075884 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:37:19.075896 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-F69HDz-3pfw-yuxU-fx8Q-lY6u-3q4w-NOYZvt', 'dm-uuid-CRYPT-LUKS2-ee84a40ac8f543638b92865eb14b3049-F69HDz-3pfw-yuxU-fx8Q-lY6u-3q4w-NOYZvt'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-03 06:37:19.075905 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:37:19.075913 | orchestrator | 2026-02-03 06:37:19.075921 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-03 06:37:19.075929 | orchestrator | Tuesday 03 February 2026 06:37:18 +0000 (0:00:01.514) 0:42:31.985 ****** 2026-02-03 06:37:19.075952 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:37:19.075961 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1a37b12a--042e--589b--8d7d--13944ef33291-osd--block--1a37b12a--042e--589b--8d7d--13944ef33291', 'dm-uuid-LVM-F6tlR8rX28mHBuGZmIB9CPxCef1PwVO1F69HDz3pfwyuxUfx8QlY6u3q4wNOYZvt'], 'uuids': ['ee84a40a-c8f5-4363-8b92-865eb14b3049'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f58f055b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['F69HDz-3pfw-yuxU-fx8Q-lY6u-3q4w-NOYZvt']}}, 'ansible_loop_var': 'item'})  2026-02-03 06:37:19.075985 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15b94581-7087-40af-83f2-cd9970e768be', 'scsi-SQEMU_QEMU_HARDDISK_15b94581-7087-40af-83f2-cd9970e768be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '15b94581', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:37:19.076008 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-OIAfSx-9FrO-G71T-2YtW-9cXZ-u9sv-iVlruI', 'scsi-0QEMU_QEMU_HARDDISK_6b074c22-654d-40e5-9251-7e10d9fad41a', 'scsi-SQEMU_QEMU_HARDDISK_6b074c22-654d-40e5-9251-7e10d9fad41a'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6b074c22', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--121565c5--01e5--5794--959e--075d91e35362-osd--block--121565c5--01e5--5794--959e--075d91e35362']}}, 'ansible_loop_var': 'item'})  2026-02-03 06:37:19.076017 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:37:19.076023 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:37:19.076034 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:37:19.076046 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:37:19.076058 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0SrgRP-hb81-g1cZ-8CRq-deoz-Hyru-PhRzun', 'dm-uuid-CRYPT-LUKS2-1805b057808e47489bd25959cb85c8e5-0SrgRP-hb81-g1cZ-8CRq-deoz-Hyru-PhRzun'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:37:24.812689 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:37:24.812889 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--121565c5--01e5--5794--959e--075d91e35362-osd--block--121565c5--01e5--5794--959e--075d91e35362', 'dm-uuid-LVM-JxrjzObQ9uufb9OS44FMciQneXibANhw0SrgRPhb81g1cZ8CRqdeozHyruPhRzun'], 'uuids': ['1805b057-808e-4748-9bd2-5959cb85c8e5'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6b074c22', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0SrgRP-hb81-g1cZ-8CRq-deoz-Hyru-PhRzun']}}, 'ansible_loop_var': 'item'})  2026-02-03 06:37:24.812946 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-QlIL1O-6aa2-xc1n-eTaR-0yU7-qpeR-rfKE1n', 'scsi-0QEMU_QEMU_HARDDISK_f58f055b-eadc-4fe1-a72e-2d1917f1f2dd', 'scsi-SQEMU_QEMU_HARDDISK_f58f055b-eadc-4fe1-a72e-2d1917f1f2dd'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f58f055b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1a37b12a--042e--589b--8d7d--13944ef33291-osd--block--1a37b12a--042e--589b--8d7d--13944ef33291']}}, 'ansible_loop_var': 'item'})  2026-02-03 06:37:24.812986 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:37:24.813024 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9ac79520', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:37:24.813039 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:37:24.813056 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:37:24.813077 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-F69HDz-3pfw-yuxU-fx8Q-lY6u-3q4w-NOYZvt', 'dm-uuid-CRYPT-LUKS2-ee84a40ac8f543638b92865eb14b3049-F69HDz-3pfw-yuxU-fx8Q-lY6u-3q4w-NOYZvt'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:37:24.813090 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:37:24.813103 | orchestrator | 2026-02-03 06:37:24.813115 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-03 06:37:24.813128 | orchestrator | Tuesday 03 February 2026 06:37:20 +0000 (0:00:01.557) 0:42:33.543 ****** 2026-02-03 06:37:24.813139 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:37:24.813151 | orchestrator | 2026-02-03 06:37:24.813162 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-03 06:37:24.813173 | orchestrator | Tuesday 03 February 2026 06:37:21 +0000 (0:00:01.630) 0:42:35.174 ****** 2026-02-03 06:37:24.813191 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:37:24.813210 | orchestrator | 2026-02-03 06:37:24.813228 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-03 06:37:24.813246 | orchestrator | Tuesday 03 February 2026 06:37:23 +0000 (0:00:01.170) 0:42:36.344 ****** 2026-02-03 06:37:24.813264 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:37:24.813283 | orchestrator | 2026-02-03 06:37:24.813302 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-03 06:37:24.813335 | orchestrator | Tuesday 03 February 2026 06:37:24 +0000 (0:00:01.643) 0:42:37.988 ****** 2026-02-03 06:38:09.572483 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:09.572603 | orchestrator | 2026-02-03 06:38:09.572620 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-03 06:38:09.572633 | orchestrator | Tuesday 03 February 2026 06:37:26 +0000 (0:00:01.283) 0:42:39.271 ****** 2026-02-03 06:38:09.572645 | orchestrator | skipping: [testbed-node-4] 2026-02-03 
06:38:09.572656 | orchestrator | 2026-02-03 06:38:09.572667 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-03 06:38:09.572679 | orchestrator | Tuesday 03 February 2026 06:37:27 +0000 (0:00:01.337) 0:42:40.609 ****** 2026-02-03 06:38:09.572690 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:09.572701 | orchestrator | 2026-02-03 06:38:09.572711 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-03 06:38:09.572722 | orchestrator | Tuesday 03 February 2026 06:37:28 +0000 (0:00:01.219) 0:42:41.828 ****** 2026-02-03 06:38:09.572734 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-03 06:38:09.572745 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-03 06:38:09.572756 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-03 06:38:09.572828 | orchestrator | 2026-02-03 06:38:09.572843 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-03 06:38:09.572854 | orchestrator | Tuesday 03 February 2026 06:37:30 +0000 (0:00:01.806) 0:42:43.635 ****** 2026-02-03 06:38:09.572865 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-03 06:38:09.572876 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-03 06:38:09.572888 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-03 06:38:09.572924 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:09.572936 | orchestrator | 2026-02-03 06:38:09.572946 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-03 06:38:09.572958 | orchestrator | Tuesday 03 February 2026 06:37:31 +0000 (0:00:01.545) 0:42:45.180 ****** 2026-02-03 06:38:09.572969 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4 2026-02-03 06:38:09.572981 | 
orchestrator | 2026-02-03 06:38:09.572993 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-03 06:38:09.573005 | orchestrator | Tuesday 03 February 2026 06:37:33 +0000 (0:00:01.314) 0:42:46.494 ****** 2026-02-03 06:38:09.573019 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:09.573031 | orchestrator | 2026-02-03 06:38:09.573045 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-03 06:38:09.573058 | orchestrator | Tuesday 03 February 2026 06:37:34 +0000 (0:00:01.254) 0:42:47.749 ****** 2026-02-03 06:38:09.573071 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:09.573083 | orchestrator | 2026-02-03 06:38:09.573096 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-03 06:38:09.573109 | orchestrator | Tuesday 03 February 2026 06:37:35 +0000 (0:00:01.307) 0:42:49.057 ****** 2026-02-03 06:38:09.573121 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:09.573134 | orchestrator | 2026-02-03 06:38:09.573161 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-03 06:38:09.573174 | orchestrator | Tuesday 03 February 2026 06:37:37 +0000 (0:00:01.188) 0:42:50.245 ****** 2026-02-03 06:38:09.573187 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:38:09.573201 | orchestrator | 2026-02-03 06:38:09.573213 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-03 06:38:09.573225 | orchestrator | Tuesday 03 February 2026 06:37:38 +0000 (0:00:01.409) 0:42:51.655 ****** 2026-02-03 06:38:09.573237 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-03 06:38:09.573250 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-03 06:38:09.573263 | orchestrator | skipping: [testbed-node-4] 
=> (item=testbed-node-5)  2026-02-03 06:38:09.573275 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:09.573288 | orchestrator | 2026-02-03 06:38:09.573301 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-03 06:38:09.573314 | orchestrator | Tuesday 03 February 2026 06:37:39 +0000 (0:00:01.506) 0:42:53.162 ****** 2026-02-03 06:38:09.573327 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-03 06:38:09.573340 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-03 06:38:09.573352 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-03 06:38:09.573365 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:09.573378 | orchestrator | 2026-02-03 06:38:09.573391 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-03 06:38:09.573403 | orchestrator | Tuesday 03 February 2026 06:37:41 +0000 (0:00:01.550) 0:42:54.712 ****** 2026-02-03 06:38:09.573417 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-03 06:38:09.573430 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-03 06:38:09.573441 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-03 06:38:09.573452 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:09.573463 | orchestrator | 2026-02-03 06:38:09.573474 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-03 06:38:09.573485 | orchestrator | Tuesday 03 February 2026 06:37:43 +0000 (0:00:01.528) 0:42:56.241 ****** 2026-02-03 06:38:09.573560 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:38:09.573572 | orchestrator | 2026-02-03 06:38:09.573583 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-03 06:38:09.573594 | orchestrator | Tuesday 03 February 2026 06:37:44 +0000 
(0:00:01.219) 0:42:57.461 ****** 2026-02-03 06:38:09.573617 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-03 06:38:09.573628 | orchestrator | 2026-02-03 06:38:09.573639 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-03 06:38:09.573650 | orchestrator | Tuesday 03 February 2026 06:37:45 +0000 (0:00:01.477) 0:42:58.939 ****** 2026-02-03 06:38:09.573679 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:38:09.573691 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:38:09.573702 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:38:09.573713 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-03 06:38:09.573724 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-02-03 06:38:09.573735 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-03 06:38:09.573746 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-03 06:38:09.573756 | orchestrator | 2026-02-03 06:38:09.573791 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-03 06:38:09.573811 | orchestrator | Tuesday 03 February 2026 06:37:47 +0000 (0:00:01.949) 0:43:00.888 ****** 2026-02-03 06:38:09.573831 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:38:09.573849 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:38:09.573869 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:38:09.573881 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-02-03 06:38:09.573892 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-02-03 06:38:09.573903 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-03 06:38:09.573913 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-03 06:38:09.573924 | orchestrator | 2026-02-03 06:38:09.573935 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-02-03 06:38:09.573945 | orchestrator | Tuesday 03 February 2026 06:37:50 +0000 (0:00:02.615) 0:43:03.504 ****** 2026-02-03 06:38:09.573956 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:38:09.573967 | orchestrator | 2026-02-03 06:38:09.573978 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-02-03 06:38:09.573988 | orchestrator | Tuesday 03 February 2026 06:37:51 +0000 (0:00:01.193) 0:43:04.698 ****** 2026-02-03 06:38:09.573999 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:38:09.574010 | orchestrator | 2026-02-03 06:38:09.574087 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-02-03 06:38:09.574106 | orchestrator | Tuesday 03 February 2026 06:37:52 +0000 (0:00:00.818) 0:43:05.516 ****** 2026-02-03 06:38:09.574118 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:38:09.574128 | orchestrator | 2026-02-03 06:38:09.574139 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-02-03 06:38:09.574149 | orchestrator | Tuesday 03 February 2026 06:37:53 +0000 (0:00:00.922) 0:43:06.438 ****** 2026-02-03 06:38:09.574160 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-02-03 06:38:09.574179 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-02-03 06:38:09.574191 | orchestrator | 2026-02-03 06:38:09.574201 | orchestrator | TASK [ceph-handler : Include 
check_running_cluster.yml] ************************ 2026-02-03 06:38:09.574212 | orchestrator | Tuesday 03 February 2026 06:37:57 +0000 (0:00:04.065) 0:43:10.504 ****** 2026-02-03 06:38:09.574223 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4 2026-02-03 06:38:09.574234 | orchestrator | 2026-02-03 06:38:09.574244 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-03 06:38:09.574264 | orchestrator | Tuesday 03 February 2026 06:37:58 +0000 (0:00:01.391) 0:43:11.895 ****** 2026-02-03 06:38:09.574275 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4 2026-02-03 06:38:09.574286 | orchestrator | 2026-02-03 06:38:09.574297 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-03 06:38:09.574307 | orchestrator | Tuesday 03 February 2026 06:37:59 +0000 (0:00:01.221) 0:43:13.117 ****** 2026-02-03 06:38:09.574318 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:09.574399 | orchestrator | 2026-02-03 06:38:09.574411 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-03 06:38:09.574422 | orchestrator | Tuesday 03 February 2026 06:38:01 +0000 (0:00:01.233) 0:43:14.351 ****** 2026-02-03 06:38:09.574433 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:38:09.574444 | orchestrator | 2026-02-03 06:38:09.574455 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-03 06:38:09.574466 | orchestrator | Tuesday 03 February 2026 06:38:02 +0000 (0:00:01.582) 0:43:15.933 ****** 2026-02-03 06:38:09.574477 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:38:09.574487 | orchestrator | 2026-02-03 06:38:09.574498 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-03 06:38:09.574509 | orchestrator | 
Tuesday 03 February 2026 06:38:04 +0000 (0:00:01.619) 0:43:17.553 ****** 2026-02-03 06:38:09.574520 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:38:09.574531 | orchestrator | 2026-02-03 06:38:09.574541 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-03 06:38:09.574552 | orchestrator | Tuesday 03 February 2026 06:38:06 +0000 (0:00:01.654) 0:43:19.207 ****** 2026-02-03 06:38:09.574563 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:09.574574 | orchestrator | 2026-02-03 06:38:09.574585 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-03 06:38:09.574596 | orchestrator | Tuesday 03 February 2026 06:38:07 +0000 (0:00:01.175) 0:43:20.382 ****** 2026-02-03 06:38:09.574607 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:09.574617 | orchestrator | 2026-02-03 06:38:09.574628 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-03 06:38:09.574639 | orchestrator | Tuesday 03 February 2026 06:38:08 +0000 (0:00:01.170) 0:43:21.553 ****** 2026-02-03 06:38:09.574650 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:09.574661 | orchestrator | 2026-02-03 06:38:09.574682 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-03 06:38:52.104822 | orchestrator | Tuesday 03 February 2026 06:38:09 +0000 (0:00:01.195) 0:43:22.748 ****** 2026-02-03 06:38:52.104944 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:38:52.104962 | orchestrator | 2026-02-03 06:38:52.104976 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-03 06:38:52.104988 | orchestrator | Tuesday 03 February 2026 06:38:11 +0000 (0:00:01.685) 0:43:24.433 ****** 2026-02-03 06:38:52.105000 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:38:52.105011 | orchestrator | 2026-02-03 06:38:52.105023 | orchestrator | 
TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-03 06:38:52.105034 | orchestrator | Tuesday 03 February 2026 06:38:12 +0000 (0:00:01.665) 0:43:26.099 ****** 2026-02-03 06:38:52.105045 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:52.105057 | orchestrator | 2026-02-03 06:38:52.105069 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-03 06:38:52.105080 | orchestrator | Tuesday 03 February 2026 06:38:13 +0000 (0:00:00.884) 0:43:26.984 ****** 2026-02-03 06:38:52.105091 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:52.105102 | orchestrator | 2026-02-03 06:38:52.105113 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-03 06:38:52.105126 | orchestrator | Tuesday 03 February 2026 06:38:14 +0000 (0:00:00.813) 0:43:27.798 ****** 2026-02-03 06:38:52.105145 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:38:52.105163 | orchestrator | 2026-02-03 06:38:52.105180 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-03 06:38:52.105230 | orchestrator | Tuesday 03 February 2026 06:38:15 +0000 (0:00:00.850) 0:43:28.648 ****** 2026-02-03 06:38:52.105246 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:38:52.105258 | orchestrator | 2026-02-03 06:38:52.105269 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-03 06:38:52.105280 | orchestrator | Tuesday 03 February 2026 06:38:16 +0000 (0:00:00.808) 0:43:29.457 ****** 2026-02-03 06:38:52.105291 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:38:52.105304 | orchestrator | 2026-02-03 06:38:52.105317 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-03 06:38:52.105330 | orchestrator | Tuesday 03 February 2026 06:38:17 +0000 (0:00:00.803) 0:43:30.261 ****** 2026-02-03 06:38:52.105343 | 
orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:52.105355 | orchestrator | 2026-02-03 06:38:52.105368 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-03 06:38:52.105381 | orchestrator | Tuesday 03 February 2026 06:38:17 +0000 (0:00:00.816) 0:43:31.078 ****** 2026-02-03 06:38:52.105394 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:52.105407 | orchestrator | 2026-02-03 06:38:52.105420 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-03 06:38:52.105433 | orchestrator | Tuesday 03 February 2026 06:38:18 +0000 (0:00:00.843) 0:43:31.921 ****** 2026-02-03 06:38:52.105445 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:52.105458 | orchestrator | 2026-02-03 06:38:52.105471 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-03 06:38:52.105483 | orchestrator | Tuesday 03 February 2026 06:38:19 +0000 (0:00:00.825) 0:43:32.747 ****** 2026-02-03 06:38:52.105512 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:38:52.105525 | orchestrator | 2026-02-03 06:38:52.105538 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-03 06:38:52.105550 | orchestrator | Tuesday 03 February 2026 06:38:20 +0000 (0:00:00.927) 0:43:33.674 ****** 2026-02-03 06:38:52.105563 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:38:52.105576 | orchestrator | 2026-02-03 06:38:52.105589 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-03 06:38:52.105601 | orchestrator | Tuesday 03 February 2026 06:38:21 +0000 (0:00:00.830) 0:43:34.505 ****** 2026-02-03 06:38:52.105613 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:52.105626 | orchestrator | 2026-02-03 06:38:52.105638 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-03 
06:38:52.105651 | orchestrator | Tuesday 03 February 2026 06:38:22 +0000 (0:00:00.803) 0:43:35.309 ****** 2026-02-03 06:38:52.105664 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:52.105675 | orchestrator | 2026-02-03 06:38:52.105686 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-03 06:38:52.105697 | orchestrator | Tuesday 03 February 2026 06:38:22 +0000 (0:00:00.820) 0:43:36.129 ****** 2026-02-03 06:38:52.105708 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:52.105719 | orchestrator | 2026-02-03 06:38:52.105730 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-03 06:38:52.105740 | orchestrator | Tuesday 03 February 2026 06:38:23 +0000 (0:00:00.879) 0:43:37.008 ****** 2026-02-03 06:38:52.105751 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:52.105782 | orchestrator | 2026-02-03 06:38:52.105794 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-03 06:38:52.105805 | orchestrator | Tuesday 03 February 2026 06:38:24 +0000 (0:00:00.941) 0:43:37.950 ****** 2026-02-03 06:38:52.105815 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:52.105826 | orchestrator | 2026-02-03 06:38:52.105837 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-03 06:38:52.105848 | orchestrator | Tuesday 03 February 2026 06:38:25 +0000 (0:00:00.867) 0:43:38.817 ****** 2026-02-03 06:38:52.105859 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:52.105870 | orchestrator | 2026-02-03 06:38:52.105881 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-03 06:38:52.105904 | orchestrator | Tuesday 03 February 2026 06:38:26 +0000 (0:00:00.826) 0:43:39.644 ****** 2026-02-03 06:38:52.105915 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:52.105926 | 
orchestrator | 2026-02-03 06:38:52.105937 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-03 06:38:52.105948 | orchestrator | Tuesday 03 February 2026 06:38:27 +0000 (0:00:00.866) 0:43:40.511 ****** 2026-02-03 06:38:52.105959 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:52.105970 | orchestrator | 2026-02-03 06:38:52.105981 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-03 06:38:52.105992 | orchestrator | Tuesday 03 February 2026 06:38:28 +0000 (0:00:00.806) 0:43:41.317 ****** 2026-02-03 06:38:52.106077 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:52.106092 | orchestrator | 2026-02-03 06:38:52.106103 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-03 06:38:52.106114 | orchestrator | Tuesday 03 February 2026 06:38:28 +0000 (0:00:00.842) 0:43:42.160 ****** 2026-02-03 06:38:52.106125 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:52.106136 | orchestrator | 2026-02-03 06:38:52.106147 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-03 06:38:52.106158 | orchestrator | Tuesday 03 February 2026 06:38:29 +0000 (0:00:00.783) 0:43:42.944 ****** 2026-02-03 06:38:52.106169 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:52.106180 | orchestrator | 2026-02-03 06:38:52.106190 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-03 06:38:52.106201 | orchestrator | Tuesday 03 February 2026 06:38:30 +0000 (0:00:00.861) 0:43:43.806 ****** 2026-02-03 06:38:52.106212 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:52.106223 | orchestrator | 2026-02-03 06:38:52.106234 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-03 06:38:52.106245 | orchestrator | Tuesday 03 
February 2026 06:38:31 +0000 (0:00:00.808) 0:43:44.615 ****** 2026-02-03 06:38:52.106256 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:38:52.106267 | orchestrator | 2026-02-03 06:38:52.106278 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-03 06:38:52.106289 | orchestrator | Tuesday 03 February 2026 06:38:33 +0000 (0:00:01.660) 0:43:46.275 ****** 2026-02-03 06:38:52.106300 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:38:52.106311 | orchestrator | 2026-02-03 06:38:52.106322 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-03 06:38:52.106333 | orchestrator | Tuesday 03 February 2026 06:38:35 +0000 (0:00:02.051) 0:43:48.327 ****** 2026-02-03 06:38:52.106344 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-02-03 06:38:52.106356 | orchestrator | 2026-02-03 06:38:52.106368 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-03 06:38:52.106378 | orchestrator | Tuesday 03 February 2026 06:38:36 +0000 (0:00:01.409) 0:43:49.737 ****** 2026-02-03 06:38:52.106389 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:52.106400 | orchestrator | 2026-02-03 06:38:52.106411 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-03 06:38:52.106422 | orchestrator | Tuesday 03 February 2026 06:38:37 +0000 (0:00:01.265) 0:43:51.002 ****** 2026-02-03 06:38:52.106433 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:52.106444 | orchestrator | 2026-02-03 06:38:52.106455 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-03 06:38:52.106465 | orchestrator | Tuesday 03 February 2026 06:38:39 +0000 (0:00:01.204) 0:43:52.207 ****** 2026-02-03 06:38:52.106477 | orchestrator | ok: [testbed-node-4] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-03 06:38:52.106488 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-03 06:38:52.106499 | orchestrator | 2026-02-03 06:38:52.106517 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-03 06:38:52.106536 | orchestrator | Tuesday 03 February 2026 06:38:40 +0000 (0:00:01.945) 0:43:54.153 ****** 2026-02-03 06:38:52.106547 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:38:52.106557 | orchestrator | 2026-02-03 06:38:52.106568 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-03 06:38:52.106579 | orchestrator | Tuesday 03 February 2026 06:38:42 +0000 (0:00:01.525) 0:43:55.679 ****** 2026-02-03 06:38:52.106590 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:52.106601 | orchestrator | 2026-02-03 06:38:52.106612 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-03 06:38:52.106623 | orchestrator | Tuesday 03 February 2026 06:38:43 +0000 (0:00:01.224) 0:43:56.903 ****** 2026-02-03 06:38:52.106634 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:52.106645 | orchestrator | 2026-02-03 06:38:52.106656 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-03 06:38:52.106666 | orchestrator | Tuesday 03 February 2026 06:38:44 +0000 (0:00:00.823) 0:43:57.726 ****** 2026-02-03 06:38:52.106677 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:52.106688 | orchestrator | 2026-02-03 06:38:52.106699 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-03 06:38:52.106710 | orchestrator | Tuesday 03 February 2026 06:38:45 +0000 (0:00:00.803) 0:43:58.530 ****** 2026-02-03 06:38:52.106721 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for 
testbed-node-4 2026-02-03 06:38:52.106732 | orchestrator | 2026-02-03 06:38:52.106743 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-03 06:38:52.106754 | orchestrator | Tuesday 03 February 2026 06:38:46 +0000 (0:00:01.165) 0:43:59.696 ****** 2026-02-03 06:38:52.106794 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:38:52.106806 | orchestrator | 2026-02-03 06:38:52.106816 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-03 06:38:52.106827 | orchestrator | Tuesday 03 February 2026 06:38:48 +0000 (0:00:01.748) 0:44:01.444 ****** 2026-02-03 06:38:52.106839 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-03 06:38:52.106850 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-03 06:38:52.106861 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-03 06:38:52.106872 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:52.106882 | orchestrator | 2026-02-03 06:38:52.106893 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-03 06:38:52.106904 | orchestrator | Tuesday 03 February 2026 06:38:49 +0000 (0:00:01.211) 0:44:02.656 ****** 2026-02-03 06:38:52.106915 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:38:52.106926 | orchestrator | 2026-02-03 06:38:52.106937 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-03 06:38:52.106948 | orchestrator | Tuesday 03 February 2026 06:38:50 +0000 (0:00:01.300) 0:44:03.957 ****** 2026-02-03 06:38:52.106967 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:39:37.254879 | orchestrator | 2026-02-03 06:39:37.254996 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-03 06:39:37.255014 | 
orchestrator | Tuesday 03 February 2026 06:38:52 +0000 (0:00:01.323) 0:44:05.281 ****** 2026-02-03 06:39:37.255025 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:39:37.255037 | orchestrator | 2026-02-03 06:39:37.255049 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-03 06:39:37.255061 | orchestrator | Tuesday 03 February 2026 06:38:53 +0000 (0:00:01.236) 0:44:06.517 ****** 2026-02-03 06:39:37.255073 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:39:37.255085 | orchestrator | 2026-02-03 06:39:37.255095 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-03 06:39:37.255107 | orchestrator | Tuesday 03 February 2026 06:38:54 +0000 (0:00:01.229) 0:44:07.747 ****** 2026-02-03 06:39:37.255115 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:39:37.255122 | orchestrator | 2026-02-03 06:39:37.255148 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-03 06:39:37.255155 | orchestrator | Tuesday 03 February 2026 06:38:55 +0000 (0:00:00.829) 0:44:08.576 ****** 2026-02-03 06:39:37.255162 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:39:37.255170 | orchestrator | 2026-02-03 06:39:37.255177 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-03 06:39:37.255184 | orchestrator | Tuesday 03 February 2026 06:38:57 +0000 (0:00:02.276) 0:44:10.853 ****** 2026-02-03 06:39:37.255191 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:39:37.255197 | orchestrator | 2026-02-03 06:39:37.255204 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-03 06:39:37.255211 | orchestrator | Tuesday 03 February 2026 06:38:58 +0000 (0:00:00.854) 0:44:11.707 ****** 2026-02-03 06:39:37.255217 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 
2026-02-03 06:39:37.255224 | orchestrator | 2026-02-03 06:39:37.255230 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-03 06:39:37.255237 | orchestrator | Tuesday 03 February 2026 06:38:59 +0000 (0:00:01.147) 0:44:12.855 ****** 2026-02-03 06:39:37.255244 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:39:37.255250 | orchestrator | 2026-02-03 06:39:37.255257 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-03 06:39:37.255263 | orchestrator | Tuesday 03 February 2026 06:39:00 +0000 (0:00:01.235) 0:44:14.090 ****** 2026-02-03 06:39:37.255270 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:39:37.255277 | orchestrator | 2026-02-03 06:39:37.255284 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-03 06:39:37.255290 | orchestrator | Tuesday 03 February 2026 06:39:02 +0000 (0:00:01.218) 0:44:15.309 ****** 2026-02-03 06:39:37.255297 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:39:37.255303 | orchestrator | 2026-02-03 06:39:37.255310 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-03 06:39:37.255317 | orchestrator | Tuesday 03 February 2026 06:39:03 +0000 (0:00:01.209) 0:44:16.518 ****** 2026-02-03 06:39:37.255335 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:39:37.255342 | orchestrator | 2026-02-03 06:39:37.255348 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-03 06:39:37.255369 | orchestrator | Tuesday 03 February 2026 06:39:05 +0000 (0:00:01.776) 0:44:18.295 ****** 2026-02-03 06:39:37.255385 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:39:37.255393 | orchestrator | 2026-02-03 06:39:37.255401 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-03 06:39:37.255409 | orchestrator | 
Tuesday 03 February 2026 06:39:06 +0000 (0:00:01.258) 0:44:19.554 ****** 2026-02-03 06:39:37.255417 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:39:37.255425 | orchestrator | 2026-02-03 06:39:37.255433 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-03 06:39:37.255441 | orchestrator | Tuesday 03 February 2026 06:39:07 +0000 (0:00:01.283) 0:44:20.838 ****** 2026-02-03 06:39:37.255449 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:39:37.255456 | orchestrator | 2026-02-03 06:39:37.255464 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-03 06:39:37.255472 | orchestrator | Tuesday 03 February 2026 06:39:08 +0000 (0:00:01.152) 0:44:21.991 ****** 2026-02-03 06:39:37.255479 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:39:37.255487 | orchestrator | 2026-02-03 06:39:37.255495 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-03 06:39:37.255503 | orchestrator | Tuesday 03 February 2026 06:39:09 +0000 (0:00:01.153) 0:44:23.145 ****** 2026-02-03 06:39:37.255510 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:39:37.255518 | orchestrator | 2026-02-03 06:39:37.255526 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-03 06:39:37.255533 | orchestrator | Tuesday 03 February 2026 06:39:10 +0000 (0:00:00.880) 0:44:24.026 ****** 2026-02-03 06:39:37.255542 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-02-03 06:39:37.255558 | orchestrator | 2026-02-03 06:39:37.255566 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-03 06:39:37.255573 | orchestrator | Tuesday 03 February 2026 06:39:12 +0000 (0:00:01.166) 0:44:25.192 ****** 2026-02-03 06:39:37.255581 | orchestrator | ok: [testbed-node-4] => 
(item=/etc/ceph) 2026-02-03 06:39:37.255590 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-03 06:39:37.255598 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-03 06:39:37.255606 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-03 06:39:37.255614 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-03 06:39:37.255621 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-03 06:39:37.255629 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-03 06:39:37.255637 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-03 06:39:37.255645 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-03 06:39:37.255668 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-03 06:39:37.255676 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-03 06:39:37.255684 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-03 06:39:37.255692 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-03 06:39:37.255700 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-03 06:39:37.255708 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-02-03 06:39:37.255716 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-02-03 06:39:37.255728 | orchestrator | 2026-02-03 06:39:37.255741 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-03 06:39:37.255753 | orchestrator | Tuesday 03 February 2026 06:39:18 +0000 (0:00:06.457) 0:44:31.650 ****** 2026-02-03 06:39:37.255793 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-02-03 06:39:37.255805 | orchestrator | 2026-02-03 06:39:37.255816 | orchestrator | TASK 
[ceph-config : Create rados gateway instance directories] ***************** 2026-02-03 06:39:37.255828 | orchestrator | Tuesday 03 February 2026 06:39:19 +0000 (0:00:01.169) 0:44:32.819 ****** 2026-02-03 06:39:37.255839 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-03 06:39:37.255852 | orchestrator | 2026-02-03 06:39:37.255860 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-03 06:39:37.255866 | orchestrator | Tuesday 03 February 2026 06:39:21 +0000 (0:00:01.523) 0:44:34.343 ****** 2026-02-03 06:39:37.255873 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-03 06:39:37.255880 | orchestrator | 2026-02-03 06:39:37.255886 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-03 06:39:37.255893 | orchestrator | Tuesday 03 February 2026 06:39:22 +0000 (0:00:01.696) 0:44:36.039 ****** 2026-02-03 06:39:37.255899 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:39:37.255906 | orchestrator | 2026-02-03 06:39:37.255912 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-03 06:39:37.255919 | orchestrator | Tuesday 03 February 2026 06:39:23 +0000 (0:00:00.785) 0:44:36.824 ****** 2026-02-03 06:39:37.255926 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:39:37.255932 | orchestrator | 2026-02-03 06:39:37.255939 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-03 06:39:37.255945 | orchestrator | Tuesday 03 February 2026 06:39:24 +0000 (0:00:00.928) 0:44:37.753 ****** 2026-02-03 06:39:37.255952 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:39:37.255958 | orchestrator | 2026-02-03 06:39:37.255965 | orchestrator | TASK [ceph-config : 
Set_fact rejected_devices] ********************************* 2026-02-03 06:39:37.255978 | orchestrator | Tuesday 03 February 2026 06:39:25 +0000 (0:00:00.808) 0:44:38.561 ****** 2026-02-03 06:39:37.255989 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:39:37.255996 | orchestrator | 2026-02-03 06:39:37.256003 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-03 06:39:37.256009 | orchestrator | Tuesday 03 February 2026 06:39:26 +0000 (0:00:00.841) 0:44:39.403 ****** 2026-02-03 06:39:37.256016 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:39:37.256022 | orchestrator | 2026-02-03 06:39:37.256029 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-03 06:39:37.256036 | orchestrator | Tuesday 03 February 2026 06:39:27 +0000 (0:00:00.803) 0:44:40.207 ****** 2026-02-03 06:39:37.256043 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:39:37.256049 | orchestrator | 2026-02-03 06:39:37.256056 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-03 06:39:37.256063 | orchestrator | Tuesday 03 February 2026 06:39:27 +0000 (0:00:00.796) 0:44:41.003 ****** 2026-02-03 06:39:37.256069 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:39:37.256076 | orchestrator | 2026-02-03 06:39:37.256082 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-03 06:39:37.256089 | orchestrator | Tuesday 03 February 2026 06:39:28 +0000 (0:00:00.789) 0:44:41.793 ****** 2026-02-03 06:39:37.256096 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:39:37.256102 | orchestrator | 2026-02-03 06:39:37.256109 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-03 06:39:37.256115 | orchestrator | Tuesday 03 
February 2026 06:39:29 +0000 (0:00:00.788) 0:44:42.582 ****** 2026-02-03 06:39:37.256122 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:39:37.256132 | orchestrator | 2026-02-03 06:39:37.256143 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-03 06:39:37.256153 | orchestrator | Tuesday 03 February 2026 06:39:30 +0000 (0:00:00.775) 0:44:43.357 ****** 2026-02-03 06:39:37.256163 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:39:37.256176 | orchestrator | 2026-02-03 06:39:37.256189 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-03 06:39:37.256196 | orchestrator | Tuesday 03 February 2026 06:39:31 +0000 (0:00:00.850) 0:44:44.208 ****** 2026-02-03 06:39:37.256203 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:39:37.256209 | orchestrator | 2026-02-03 06:39:37.256216 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-03 06:39:37.256222 | orchestrator | Tuesday 03 February 2026 06:39:31 +0000 (0:00:00.876) 0:44:45.084 ****** 2026-02-03 06:39:37.256229 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-02-03 06:39:37.256235 | orchestrator | 2026-02-03 06:39:37.256242 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-03 06:39:37.256252 | orchestrator | Tuesday 03 February 2026 06:39:36 +0000 (0:00:04.492) 0:44:49.577 ****** 2026-02-03 06:39:37.256271 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-03 06:40:20.846206 | orchestrator | 2026-02-03 06:40:20.846330 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-03 06:40:20.846347 | orchestrator | Tuesday 03 February 2026 06:39:37 +0000 (0:00:00.848) 0:44:50.425 ****** 2026-02-03 06:40:20.846360 | 
orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-02-03 06:40:20.846376 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-02-03 06:40:20.846414 | orchestrator | 2026-02-03 06:40:20.846427 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-03 06:40:20.846438 | orchestrator | Tuesday 03 February 2026 06:39:45 +0000 (0:00:07.931) 0:44:58.357 ****** 2026-02-03 06:40:20.846449 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:40:20.846461 | orchestrator | 2026-02-03 06:40:20.846472 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-03 06:40:20.846483 | orchestrator | Tuesday 03 February 2026 06:39:46 +0000 (0:00:00.872) 0:44:59.230 ****** 2026-02-03 06:40:20.846494 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:40:20.846504 | orchestrator | 2026-02-03 06:40:20.846516 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-03 06:40:20.846528 | orchestrator | Tuesday 03 February 2026 06:39:46 +0000 (0:00:00.893) 0:45:00.124 ****** 2026-02-03 06:40:20.846539 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:40:20.846550 | orchestrator | 2026-02-03 06:40:20.846561 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] ****
2026-02-03 06:40:20.846571 | orchestrator | Tuesday 03 February 2026 06:39:47 +0000 (0:00:00.817) 0:45:00.941 ******
2026-02-03 06:40:20.846582 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:40:20.846593 | orchestrator |
2026-02-03 06:40:20.846604 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-03 06:40:20.846615 | orchestrator | Tuesday 03 February 2026 06:39:48 +0000 (0:00:00.839) 0:45:01.780 ******
2026-02-03 06:40:20.846625 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:40:20.846636 | orchestrator |
2026-02-03 06:40:20.846647 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-03 06:40:20.846673 | orchestrator | Tuesday 03 February 2026 06:39:49 +0000 (0:00:00.822) 0:45:02.603 ******
2026-02-03 06:40:20.846684 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:40:20.846696 | orchestrator |
2026-02-03 06:40:20.846707 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-03 06:40:20.846717 | orchestrator | Tuesday 03 February 2026 06:39:50 +0000 (0:00:00.935) 0:45:03.539 ******
2026-02-03 06:40:20.846728 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-03 06:40:20.846741 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-03 06:40:20.846800 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-03 06:40:20.846815 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:40:20.846828 | orchestrator |
2026-02-03 06:40:20.846840 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-03 06:40:20.846852 | orchestrator | Tuesday 03 February 2026 06:39:51 +0000 (0:00:01.198) 0:45:04.737 ******
2026-02-03 06:40:20.846865 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-03 06:40:20.846878 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-03 06:40:20.846890 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-03 06:40:20.846902 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:40:20.846915 | orchestrator |
2026-02-03 06:40:20.846927 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-03 06:40:20.846940 | orchestrator | Tuesday 03 February 2026 06:39:52 +0000 (0:00:01.154) 0:45:05.892 ******
2026-02-03 06:40:20.846952 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-03 06:40:20.846965 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-03 06:40:20.846977 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-03 06:40:20.846989 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:40:20.847002 | orchestrator |
2026-02-03 06:40:20.847015 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-03 06:40:20.847038 | orchestrator | Tuesday 03 February 2026 06:39:53 +0000 (0:00:01.141) 0:45:07.034 ******
2026-02-03 06:40:20.847052 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:40:20.847063 | orchestrator |
2026-02-03 06:40:20.847074 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-03 06:40:20.847085 | orchestrator | Tuesday 03 February 2026 06:39:54 +0000 (0:00:00.873) 0:45:07.907 ******
2026-02-03 06:40:20.847096 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-03 06:40:20.847106 | orchestrator |
2026-02-03 06:40:20.847117 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-03 06:40:20.847128 | orchestrator | Tuesday 03 February 2026 06:39:55 +0000 (0:00:01.105) 0:45:09.013 ******
2026-02-03 06:40:20.847139 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:40:20.847150 | orchestrator |
2026-02-03 06:40:20.847161 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-02-03 06:40:20.847172 | orchestrator | Tuesday 03 February 2026 06:39:57 +0000 (0:00:01.422) 0:45:10.435 ******
2026-02-03 06:40:20.847183 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:40:20.847194 | orchestrator |
2026-02-03 06:40:20.847224 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-02-03 06:40:20.847237 | orchestrator | Tuesday 03 February 2026 06:39:58 +0000 (0:00:00.948) 0:45:11.384 ******
2026-02-03 06:40:20.847248 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-03 06:40:20.847260 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-03 06:40:20.847271 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-03 06:40:20.847282 | orchestrator |
2026-02-03 06:40:20.847293 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-02-03 06:40:20.847304 | orchestrator | Tuesday 03 February 2026 06:39:59 +0000 (0:00:01.489) 0:45:12.874 ******
2026-02-03 06:40:20.847314 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-4
2026-02-03 06:40:20.847325 | orchestrator |
2026-02-03 06:40:20.847336 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-02-03 06:40:20.847347 | orchestrator | Tuesday 03 February 2026 06:40:00 +0000 (0:00:01.208) 0:45:14.082 ******
2026-02-03 06:40:20.847358 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:40:20.847369 | orchestrator |
2026-02-03 06:40:20.847380 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-02-03 06:40:20.847391 | orchestrator | Tuesday 03 February 2026 06:40:02 +0000 (0:00:01.252) 0:45:15.334 ******
2026-02-03 06:40:20.847402 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:40:20.847413 | orchestrator |
2026-02-03 06:40:20.847424 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-02-03 06:40:20.847435 | orchestrator | Tuesday 03 February 2026 06:40:03 +0000 (0:00:01.157) 0:45:16.492 ******
2026-02-03 06:40:20.847446 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:40:20.847457 | orchestrator |
2026-02-03 06:40:20.847468 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-02-03 06:40:20.847479 | orchestrator | Tuesday 03 February 2026 06:40:04 +0000 (0:00:01.587) 0:45:18.079 ******
2026-02-03 06:40:20.847490 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:40:20.847501 | orchestrator |
2026-02-03 06:40:20.847512 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-02-03 06:40:20.847523 | orchestrator | Tuesday 03 February 2026 06:40:06 +0000 (0:00:01.336) 0:45:19.416 ******
2026-02-03 06:40:20.847534 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-03 06:40:20.847546 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-03 06:40:20.847557 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-03 06:40:20.847567 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-03 06:40:20.847585 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-03 06:40:20.847606 | orchestrator |
2026-02-03 06:40:20.847617 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-02-03 06:40:20.847628 | orchestrator | Tuesday 03 February 2026 06:40:08 +0000 (0:00:02.591) 0:45:22.008 ******
2026-02-03 06:40:20.847639 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:40:20.847650 | orchestrator |
2026-02-03 06:40:20.847661 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-02-03 06:40:20.847672 | orchestrator | Tuesday 03 February 2026 06:40:09 +0000 (0:00:00.819) 0:45:22.828 ******
2026-02-03 06:40:20.847683 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-4
2026-02-03 06:40:20.847694 | orchestrator |
2026-02-03 06:40:20.847802 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-02-03 06:40:20.847814 | orchestrator | Tuesday 03 February 2026 06:40:10 +0000 (0:00:01.189) 0:45:24.017 ******
2026-02-03 06:40:20.847825 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-03 06:40:20.847836 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-02-03 06:40:20.847847 | orchestrator |
2026-02-03 06:40:20.847858 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-02-03 06:40:20.847869 | orchestrator | Tuesday 03 February 2026 06:40:12 +0000 (0:00:01.920) 0:45:25.938 ******
2026-02-03 06:40:20.847880 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-03 06:40:20.847892 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-03 06:40:20.847903 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-03 06:40:20.847914 | orchestrator |
2026-02-03 06:40:20.847925 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-02-03 06:40:20.847936 | orchestrator | Tuesday 03 February 2026 06:40:16 +0000 (0:00:03.777) 0:45:29.715 ******
2026-02-03 06:40:20.847947 | orchestrator | ok: [testbed-node-4] => (item=None)
2026-02-03 06:40:20.847958 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-03 06:40:20.847969 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:40:20.847980 | orchestrator |
2026-02-03 06:40:20.847991 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-02-03 06:40:20.848002 | orchestrator | Tuesday 03 February 2026 06:40:18 +0000 (0:00:01.720) 0:45:31.436 ******
2026-02-03 06:40:20.848013 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:40:20.848024 | orchestrator |
2026-02-03 06:40:20.848035 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-02-03 06:40:20.848046 | orchestrator | Tuesday 03 February 2026 06:40:19 +0000 (0:00:00.950) 0:45:32.386 ******
2026-02-03 06:40:20.848057 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:40:20.848068 | orchestrator |
2026-02-03 06:40:20.848079 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-02-03 06:40:20.848090 | orchestrator | Tuesday 03 February 2026 06:40:19 +0000 (0:00:00.794) 0:45:33.181 ******
2026-02-03 06:40:20.848101 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:40:20.848112 | orchestrator |
2026-02-03 06:40:20.848131 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-02-03 06:41:31.081608 | orchestrator | Tuesday 03 February 2026 06:40:20 +0000 (0:00:00.835) 0:45:34.016 ******
2026-02-03 06:41:31.081723 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-4
2026-02-03 06:41:31.081740 | orchestrator |
2026-02-03 06:41:31.081753 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-02-03 06:41:31.081818 | orchestrator | Tuesday 03 February 2026 06:40:22 +0000 (0:00:01.551) 0:45:35.204 ******
2026-02-03 06:41:31.081830 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:41:31.081843 | orchestrator |
2026-02-03 06:41:31.081854 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-02-03 06:41:31.081866 | orchestrator | Tuesday 03 February 2026 06:40:23 +0000 (0:00:01.551) 0:45:36.755 ******
2026-02-03 06:41:31.081901 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:41:31.081913 | orchestrator |
2026-02-03 06:41:31.081924 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-02-03 06:41:31.081934 | orchestrator | Tuesday 03 February 2026 06:40:27 +0000 (0:00:03.534) 0:45:40.289 ******
2026-02-03 06:41:31.081945 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-4
2026-02-03 06:41:31.081956 | orchestrator |
2026-02-03 06:41:31.081967 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-02-03 06:41:31.081978 | orchestrator | Tuesday 03 February 2026 06:40:28 +0000 (0:00:01.138) 0:45:41.428 ******
2026-02-03 06:41:31.081988 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:41:31.081999 | orchestrator |
2026-02-03 06:41:31.082010 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-02-03 06:41:31.082080 | orchestrator | Tuesday 03 February 2026 06:40:30 +0000 (0:00:02.045) 0:45:43.474 ******
2026-02-03 06:41:31.082091 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:41:31.082102 | orchestrator |
2026-02-03 06:41:31.082113 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-02-03 06:41:31.082124 | orchestrator | Tuesday 03 February 2026 06:40:32 +0000 (0:00:02.053) 0:45:45.528 ******
2026-02-03 06:41:31.082135 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:41:31.082148 | orchestrator |
2026-02-03 06:41:31.082161 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-02-03 06:41:31.082173 | orchestrator | Tuesday 03 February 2026 06:40:34 +0000 (0:00:02.514) 0:45:48.042 ******
2026-02-03 06:41:31.082188 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:41:31.082201 | orchestrator |
2026-02-03 06:41:31.082214 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-02-03 06:41:31.082227 | orchestrator | Tuesday 03 February 2026 06:40:36 +0000 (0:00:01.321) 0:45:49.364 ******
2026-02-03 06:41:31.082239 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:41:31.082252 | orchestrator |
2026-02-03 06:41:31.082265 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-02-03 06:41:31.082278 | orchestrator | Tuesday 03 February 2026 06:40:37 +0000 (0:00:01.186) 0:45:50.551 ******
2026-02-03 06:41:31.082307 | orchestrator | ok: [testbed-node-4] => (item=1)
2026-02-03 06:41:31.082322 | orchestrator | ok: [testbed-node-4] => (item=4)
2026-02-03 06:41:31.082334 | orchestrator |
2026-02-03 06:41:31.082347 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-02-03 06:41:31.082361 | orchestrator | Tuesday 03 February 2026 06:40:39 +0000 (0:00:02.009) 0:45:52.560 ******
2026-02-03 06:41:31.082374 | orchestrator | ok: [testbed-node-4] => (item=1)
2026-02-03 06:41:31.082386 | orchestrator | ok: [testbed-node-4] => (item=4)
2026-02-03 06:41:31.082399 | orchestrator |
2026-02-03 06:41:31.082412 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-02-03 06:41:31.082426 | orchestrator | Tuesday 03 February 2026 06:40:42 +0000 (0:00:02.974) 0:45:55.535 ******
2026-02-03 06:41:31.082439 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-02-03 06:41:31.082451 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-02-03 06:41:31.082464 | orchestrator |
2026-02-03 06:41:31.082477 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-02-03 06:41:31.082491 | orchestrator | Tuesday 03 February 2026 06:40:46 +0000 (0:00:04.551) 0:46:00.086 ******
2026-02-03 06:41:31.082504 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:41:31.082514 | orchestrator |
2026-02-03 06:41:31.082525 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-02-03 06:41:31.082536 | orchestrator | Tuesday 03 February 2026 06:40:47 +0000 (0:00:01.023) 0:46:01.110 ******
2026-02-03 06:41:31.082547 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:41:31.082558 | orchestrator |
2026-02-03 06:41:31.082568 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-02-03 06:41:31.082579 | orchestrator | Tuesday 03 February 2026 06:40:48 +0000 (0:00:00.972) 0:46:02.082 ******
2026-02-03 06:41:31.082590 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:41:31.082608 | orchestrator |
2026-02-03 06:41:31.082620 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] **************
2026-02-03 06:41:31.082630 | orchestrator | Tuesday 03 February 2026 06:40:49 +0000 (0:00:00.988) 0:46:03.071 ******
2026-02-03 06:41:31.082641 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:41:31.082652 | orchestrator |
2026-02-03 06:41:31.082663 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] ***
2026-02-03 06:41:31.082674 | orchestrator | Tuesday 03 February 2026 06:40:50 +0000 (0:00:00.873) 0:46:03.944 ******
2026-02-03 06:41:31.082685 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:41:31.082695 | orchestrator |
2026-02-03 06:41:31.082706 | orchestrator | TASK [Waiting for clean pgs...] ************************************************
2026-02-03 06:41:31.082717 | orchestrator | Tuesday 03 February 2026 06:40:51 +0000 (0:00:00.835) 0:46:04.780 ******
2026-02-03 06:41:31.082728 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (600 retries left).
2026-02-03 06:41:31.082740 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (599 retries left).
2026-02-03 06:41:31.082751 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (598 retries left).
2026-02-03 06:41:31.082803 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (597 retries left).
2026-02-03 06:41:31.082816 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (596 retries left).
2026-02-03 06:41:31.082827 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-03 06:41:31.082839 | orchestrator |
2026-02-03 06:41:31.082849 | orchestrator | PLAY [Upgrade ceph osds cluster] ***********************************************
2026-02-03 06:41:31.082860 | orchestrator |
2026-02-03 06:41:31.082871 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-03 06:41:31.082882 | orchestrator | Tuesday 03 February 2026 06:41:09 +0000 (0:00:17.497) 0:46:22.277 ******
2026-02-03 06:41:31.082893 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5
2026-02-03 06:41:31.082904 | orchestrator |
2026-02-03 06:41:31.082915 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-03 06:41:31.082926 | orchestrator | Tuesday 03 February 2026 06:41:10 +0000 (0:00:01.181) 0:46:23.458 ******
2026-02-03 06:41:31.082937 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:41:31.082948 | orchestrator |
2026-02-03 06:41:31.082958 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-03 06:41:31.082969 | orchestrator | Tuesday 03 February 2026 06:41:11 +0000 (0:00:01.490) 0:46:24.948 ******
2026-02-03 06:41:31.082980 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:41:31.082991 | orchestrator |
2026-02-03 06:41:31.083001 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-03 06:41:31.083012 | orchestrator | Tuesday 03 February 2026 06:41:12 +0000 (0:00:01.169) 0:46:26.118 ******
2026-02-03 06:41:31.083023 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:41:31.083034 | orchestrator |
2026-02-03 06:41:31.083044 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-03 06:41:31.083055 | orchestrator | Tuesday 03 February 2026 06:41:14 +0000 (0:00:01.519) 0:46:27.638 ******
2026-02-03 06:41:31.083066 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:41:31.083077 | orchestrator |
2026-02-03 06:41:31.083088 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-03 06:41:31.083099 | orchestrator | Tuesday 03 February 2026 06:41:15 +0000 (0:00:01.272) 0:46:28.910 ******
2026-02-03 06:41:31.083109 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:41:31.083120 | orchestrator |
2026-02-03 06:41:31.083131 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-03 06:41:31.083142 | orchestrator | Tuesday 03 February 2026 06:41:16 +0000 (0:00:01.188) 0:46:30.099 ******
2026-02-03 06:41:31.083153 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:41:31.083172 | orchestrator |
2026-02-03 06:41:31.083183 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-03 06:41:31.083194 | orchestrator | Tuesday 03 February 2026 06:41:18 +0000 (0:00:01.258) 0:46:31.357 ******
2026-02-03 06:41:31.083205 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:41:31.083215 | orchestrator |
2026-02-03 06:41:31.083232 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-03 06:41:31.083243 | orchestrator | Tuesday 03 February 2026 06:41:19 +0000 (0:00:01.195) 0:46:32.553 ******
2026-02-03 06:41:31.083254 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:41:31.083265 | orchestrator |
2026-02-03 06:41:31.083276 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-03 06:41:31.083287 | orchestrator | Tuesday 03 February 2026 06:41:20 +0000 (0:00:01.210) 0:46:33.764 ******
2026-02-03 06:41:31.083298 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-03 06:41:31.083309 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-03 06:41:31.083320 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-03 06:41:31.083330 | orchestrator |
2026-02-03 06:41:31.083341 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-03 06:41:31.083352 | orchestrator | Tuesday 03 February 2026 06:41:22 +0000 (0:00:02.208) 0:46:35.973 ******
2026-02-03 06:41:31.083363 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:41:31.083374 | orchestrator |
2026-02-03 06:41:31.083384 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-03 06:41:31.083395 | orchestrator | Tuesday 03 February 2026 06:41:24 +0000 (0:00:01.363) 0:46:37.336 ******
2026-02-03 06:41:31.083406 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-03 06:41:31.083417 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-03 06:41:31.083427 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-03 06:41:31.083438 | orchestrator |
2026-02-03 06:41:31.083449 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-03 06:41:31.083460 | orchestrator | Tuesday 03 February 2026 06:41:27 +0000 (0:00:03.591) 0:46:40.928 ******
2026-02-03 06:41:31.083471 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-03 06:41:31.083482 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-03 06:41:31.083493 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-03 06:41:31.083503 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:41:31.083514 | orchestrator |
2026-02-03 06:41:31.083525 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-03 06:41:31.083536 | orchestrator | Tuesday 03 February 2026 06:41:29 +0000 (0:00:01.624) 0:46:42.553 ******
2026-02-03 06:41:31.083548 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-03 06:41:31.083568 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-03 06:41:52.370972 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-03 06:41:52.371065 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:41:52.371074 | orchestrator |
2026-02-03 06:41:52.371081 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-03 06:41:52.371088 | orchestrator | Tuesday 03 February 2026 06:41:31 +0000 (0:00:01.702) 0:46:44.256 ******
2026-02-03 06:41:52.371122 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-03 06:41:52.371132 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-03 06:41:52.371138 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-03 06:41:52.371144 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:41:52.371150 | orchestrator |
2026-02-03 06:41:52.371167 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-03 06:41:52.371172 | orchestrator | Tuesday 03 February 2026 06:41:32 +0000 (0:00:01.222) 0:46:45.478 ******
2026-02-03 06:41:52.371180 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'fc9af7e241e8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-03 06:41:25.247332', 'end': '2026-02-03 06:41:25.293442', 'delta': '0:00:00.046110', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fc9af7e241e8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-03 06:41:52.371188 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'a8f198eef309', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-03 06:41:25.823964', 'end': '2026-02-03 06:41:25.865751', 'delta': '0:00:00.041787', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a8f198eef309'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-03 06:41:52.371206 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '79d18794d8bb', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-03 06:41:26.426290', 'end': '2026-02-03 06:41:26.472495', 'delta': '0:00:00.046205', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['79d18794d8bb'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-03 06:41:52.371212 | orchestrator |
2026-02-03 06:41:52.371224 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-03 06:41:52.371229 | orchestrator | Tuesday 03 February 2026 06:41:33 +0000 (0:00:01.260) 0:46:46.739 ******
2026-02-03 06:41:52.371235 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:41:52.371241 | orchestrator |
2026-02-03 06:41:52.371247 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-03 06:41:52.371253 | orchestrator | Tuesday 03 February 2026 06:41:34 +0000 (0:00:01.311) 0:46:48.051 ******
2026-02-03 06:41:52.371258 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:41:52.371263 | orchestrator |
2026-02-03 06:41:52.371269 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-03 06:41:52.371274 | orchestrator | Tuesday 03 February 2026 06:41:36 +0000 (0:00:01.342) 0:46:49.394 ******
2026-02-03 06:41:52.371280 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:41:52.371285 | orchestrator |
2026-02-03 06:41:52.371291 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-03 06:41:52.371296 | orchestrator | Tuesday 03 February 2026 06:41:37 +0000 (0:00:01.193) 0:46:50.587 ******
2026-02-03 06:41:52.371302 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-03 06:41:52.371307 | orchestrator |
2026-02-03 06:41:52.371312 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-03 06:41:52.371318 | orchestrator | Tuesday 03 February 2026 06:41:39 +0000 (0:00:02.064) 0:46:52.652 ******
2026-02-03 06:41:52.371323 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:41:52.371329 | orchestrator |
2026-02-03 06:41:52.371334 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-03 06:41:52.371339 | orchestrator | Tuesday 03 February 2026 06:41:40 +0000 (0:00:01.216) 0:46:53.869 ******
2026-02-03 06:41:52.371345 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:41:52.371350 | orchestrator |
2026-02-03 06:41:52.371356 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-03 06:41:52.371361 | orchestrator | Tuesday 03 February 2026 06:41:41 +0000 (0:00:01.179) 0:46:55.049 ******
2026-02-03 06:41:52.371367 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:41:52.371372 | orchestrator |
2026-02-03 06:41:52.371377 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-03 06:41:52.371383 | orchestrator | Tuesday 03 February 2026 06:41:43 +0000 (0:00:01.284) 0:46:56.334 ******
2026-02-03 06:41:52.371388 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:41:52.371394 | orchestrator |
2026-02-03 06:41:52.371399 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-03 06:41:52.371408 | orchestrator | Tuesday 03 February 2026 06:41:44 +0000 (0:00:01.262) 0:46:57.596 ******
2026-02-03 06:41:52.371415 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:41:52.371425 | orchestrator |
2026-02-03 06:41:52.371434 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-03 06:41:52.371442 | orchestrator | Tuesday 03 February 2026 06:41:45 +0000 (0:00:01.289) 0:46:58.885 ******
2026-02-03 06:41:52.371450 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:41:52.371458 | orchestrator |
2026-02-03 06:41:52.371467 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-03 06:41:52.371475 | orchestrator | Tuesday 03 February 2026 06:41:47 +0000 (0:00:01.346) 0:47:00.232 ******
2026-02-03 06:41:52.371483 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:41:52.371492 | orchestrator |
2026-02-03 06:41:52.371500 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-03 06:41:52.371508 | orchestrator | Tuesday 03 February 2026 06:41:48 +0000 (0:00:01.251) 0:47:01.483 ******
2026-02-03 06:41:52.371516 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:41:52.371524 | orchestrator |
2026-02-03 06:41:52.371533 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-03 06:41:52.371541 | orchestrator | Tuesday 03 February 2026 06:41:49 +0000 (0:00:01.310) 0:47:02.794 ******
2026-02-03 06:41:52.371549 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:41:52.371563 | orchestrator |
2026-02-03 06:41:52.371572 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-03 06:41:52.371582 | orchestrator | Tuesday 03 February 2026 06:41:50 +0000 (0:00:01.248) 0:47:04.043 ******
2026-02-03 06:41:52.371591 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:41:52.371599 | orchestrator |
2026-02-03 06:41:52.371607 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-03 06:41:52.371615 | orchestrator | Tuesday 03 February 2026 06:41:52 +0000 (0:00:01.239) 0:47:05.283 ******
2026-02-03 06:41:52.371624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-03 06:41:52.371639 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--77c51d77--cdc1--5563--af81--33d9bc4e9bd8-osd--block--77c51d77--cdc1--5563--af81--33d9bc4e9bd8', 'dm-uuid-LVM-Wbq8zZZmzC2gBNhxYxtVTvfLotN9I39ewfUHEKJIYaxWx1lem6PI2cmyC5FHw26a'], 'uuids': ['de4b76bf-9af2-40ae-a6b3-4edbecd71396'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0bcbc917', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['wfUHEK-JIYa-xWx1-lem6-PI2c-myC5-FHw26a']}})
2026-02-03 06:41:52.391697 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ed5f26b-b68f-43a9-951f-f4acae255308', 'scsi-SQEMU_QEMU_HARDDISK_1ed5f26b-b68f-43a9-951f-f4acae255308'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1ed5f26b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-03 06:41:52.391838 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-fs9ehM-rHKw-gnft-ZAPg-F21u-3MhY-bxvv54', 'scsi-0QEMU_QEMU_HARDDISK_a2e14d93-a486-403c-9c37-4f6de49ddee5', 'scsi-SQEMU_QEMU_HARDDISK_a2e14d93-a486-403c-9c37-4f6de49ddee5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2e14d93', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--9cbb71d1--90c1--5063--b304--f845b9e79bfb-osd--block--9cbb71d1--90c1--5063--b304--f845b9e79bfb']}})
2026-02-03 06:41:52.391871 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-03 06:41:52.391883 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-03 06:41:52.391910 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-03 06:41:52.391919 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0',
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:41:52.391928 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-pTdMvd-sDAF-IMSx-8jpl-0O46-FJH5-Fa8Xca', 'dm-uuid-CRYPT-LUKS2-828a04c154134531b57bb1d5e612c63b-pTdMvd-sDAF-IMSx-8jpl-0O46-FJH5-Fa8Xca'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-03 06:41:52.391952 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:41:52.391974 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9cbb71d1--90c1--5063--b304--f845b9e79bfb-osd--block--9cbb71d1--90c1--5063--b304--f845b9e79bfb', 'dm-uuid-LVM-mOPc0Zn7dvz2LW84SWB0gFMNdSnKuErspTdMvdsDAFIMSx8jpl0O46FJH5Fa8Xca'], 'uuids': ['828a04c1-5413-4531-b57b-b1d5e612c63b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2e14d93', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['pTdMvd-sDAF-IMSx-8jpl-0O46-FJH5-Fa8Xca']}})  2026-02-03 06:41:52.392004 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-XC0deN-vGzU-6Pu8-7l0p-bm5X-RdCc-NCjXuW', 'scsi-0QEMU_QEMU_HARDDISK_0bcbc917-1e4e-4947-8603-c7f49bd04ea8', 'scsi-SQEMU_QEMU_HARDDISK_0bcbc917-1e4e-4947-8603-c7f49bd04ea8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0bcbc917', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--77c51d77--cdc1--5563--af81--33d9bc4e9bd8-osd--block--77c51d77--cdc1--5563--af81--33d9bc4e9bd8']}})  2026-02-03 06:41:52.392014 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:41:52.392038 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1e34e583', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part16', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part14', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part15', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part1', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-03 06:41:53.837282 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:41:53.837391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:41:53.837411 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-wfUHEK-JIYa-xWx1-lem6-PI2c-myC5-FHw26a', 'dm-uuid-CRYPT-LUKS2-de4b76bf9af240aea6b34edbecd71396-wfUHEK-JIYa-xWx1-lem6-PI2c-myC5-FHw26a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-03 06:41:53.837427 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:41:53.837440 | orchestrator | 2026-02-03 06:41:53.837469 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-03 06:41:53.837482 | orchestrator | Tuesday 03 February 2026 06:41:53 +0000 (0:00:01.490) 0:47:06.773 ****** 2026-02-03 06:41:53.837519 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:41:53.837534 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--77c51d77--cdc1--5563--af81--33d9bc4e9bd8-osd--block--77c51d77--cdc1--5563--af81--33d9bc4e9bd8', 'dm-uuid-LVM-Wbq8zZZmzC2gBNhxYxtVTvfLotN9I39ewfUHEKJIYaxWx1lem6PI2cmyC5FHw26a'], 'uuids': ['de4b76bf-9af2-40ae-a6b3-4edbecd71396'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0bcbc917', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['wfUHEK-JIYa-xWx1-lem6-PI2c-myC5-FHw26a']}}, 'ansible_loop_var': 'item'})  2026-02-03 06:41:53.837547 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ed5f26b-b68f-43a9-951f-f4acae255308', 'scsi-SQEMU_QEMU_HARDDISK_1ed5f26b-b68f-43a9-951f-f4acae255308'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1ed5f26b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:41:53.837589 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-fs9ehM-rHKw-gnft-ZAPg-F21u-3MhY-bxvv54', 'scsi-0QEMU_QEMU_HARDDISK_a2e14d93-a486-403c-9c37-4f6de49ddee5', 'scsi-SQEMU_QEMU_HARDDISK_a2e14d93-a486-403c-9c37-4f6de49ddee5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2e14d93', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--9cbb71d1--90c1--5063--b304--f845b9e79bfb-osd--block--9cbb71d1--90c1--5063--b304--f845b9e79bfb']}}, 'ansible_loop_var': 'item'})  2026-02-03 06:41:53.837614 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:41:53.837652 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:41:53.837672 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:41:53.837690 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:41:53.837719 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-pTdMvd-sDAF-IMSx-8jpl-0O46-FJH5-Fa8Xca', 'dm-uuid-CRYPT-LUKS2-828a04c154134531b57bb1d5e612c63b-pTdMvd-sDAF-IMSx-8jpl-0O46-FJH5-Fa8Xca'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:41:59.572663 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:41:59.572805 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9cbb71d1--90c1--5063--b304--f845b9e79bfb-osd--block--9cbb71d1--90c1--5063--b304--f845b9e79bfb', 'dm-uuid-LVM-mOPc0Zn7dvz2LW84SWB0gFMNdSnKuErspTdMvdsDAFIMSx8jpl0O46FJH5Fa8Xca'], 'uuids': ['828a04c1-5413-4531-b57b-b1d5e612c63b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2e14d93', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['pTdMvd-sDAF-IMSx-8jpl-0O46-FJH5-Fa8Xca']}}, 'ansible_loop_var': 'item'})  2026-02-03 06:41:59.572841 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-XC0deN-vGzU-6Pu8-7l0p-bm5X-RdCc-NCjXuW', 'scsi-0QEMU_QEMU_HARDDISK_0bcbc917-1e4e-4947-8603-c7f49bd04ea8', 'scsi-SQEMU_QEMU_HARDDISK_0bcbc917-1e4e-4947-8603-c7f49bd04ea8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0bcbc917', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--77c51d77--cdc1--5563--af81--33d9bc4e9bd8-osd--block--77c51d77--cdc1--5563--af81--33d9bc4e9bd8']}}, 'ansible_loop_var': 'item'})  2026-02-03 06:41:59.572855 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:41:59.572918 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1e34e583', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part16', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part14', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part15', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part1', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:41:59.572952 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:41:59.572968 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:41:59.572983 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-wfUHEK-JIYa-xWx1-lem6-PI2c-myC5-FHw26a', 'dm-uuid-CRYPT-LUKS2-de4b76bf9af240aea6b34edbecd71396-wfUHEK-JIYa-xWx1-lem6-PI2c-myC5-FHw26a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:41:59.572998 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:41:59.573013 | orchestrator | 2026-02-03 06:41:59.573028 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-03 06:41:59.573037 | orchestrator | Tuesday 03 February 2026 06:41:55 +0000 (0:00:01.549) 0:47:08.323 ****** 2026-02-03 06:41:59.573045 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:41:59.573054 | orchestrator | 2026-02-03 06:41:59.573063 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-03 06:41:59.573070 | orchestrator | Tuesday 03 February 2026 06:41:56 +0000 (0:00:01.587) 0:47:09.911 ****** 2026-02-03 06:41:59.573078 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:41:59.573086 | orchestrator | 2026-02-03 06:41:59.573094 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-03 06:41:59.573102 | orchestrator | Tuesday 03 February 2026 06:41:57 +0000 (0:00:01.254) 0:47:11.166 ****** 2026-02-03 06:41:59.573109 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:41:59.573117 | orchestrator | 2026-02-03 06:41:59.573125 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-03 06:41:59.573140 | orchestrator | Tuesday 03 February 2026 06:41:59 +0000 (0:00:01.579) 0:47:12.746 ****** 2026-02-03 06:42:45.392268 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:42:45.392396 | orchestrator | 2026-02-03 06:42:45.392420 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-03 06:42:45.392435 | orchestrator | Tuesday 03 February 2026 06:42:00 +0000 (0:00:01.322) 0:47:14.068 ****** 2026-02-03 06:42:45.392449 | orchestrator | skipping: [testbed-node-5] 2026-02-03 
06:42:45.392461 | orchestrator | 2026-02-03 06:42:45.392475 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-03 06:42:45.392517 | orchestrator | Tuesday 03 February 2026 06:42:02 +0000 (0:00:01.374) 0:47:15.443 ****** 2026-02-03 06:42:45.392532 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:42:45.392547 | orchestrator | 2026-02-03 06:42:45.392562 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-03 06:42:45.392576 | orchestrator | Tuesday 03 February 2026 06:42:03 +0000 (0:00:01.221) 0:47:16.664 ****** 2026-02-03 06:42:45.392590 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-03 06:42:45.392603 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-03 06:42:45.392616 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-03 06:42:45.392629 | orchestrator | 2026-02-03 06:42:45.392642 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-03 06:42:45.392655 | orchestrator | Tuesday 03 February 2026 06:42:05 +0000 (0:00:02.321) 0:47:18.985 ****** 2026-02-03 06:42:45.392668 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-03 06:42:45.392682 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-03 06:42:45.392696 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-03 06:42:45.392709 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:42:45.392722 | orchestrator | 2026-02-03 06:42:45.392735 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-03 06:42:45.392750 | orchestrator | Tuesday 03 February 2026 06:42:07 +0000 (0:00:01.237) 0:47:20.223 ****** 2026-02-03 06:42:45.392808 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-02-03 06:42:45.392824 | 
orchestrator | 2026-02-03 06:42:45.392839 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-03 06:42:45.392854 | orchestrator | Tuesday 03 February 2026 06:42:08 +0000 (0:00:01.244) 0:47:21.468 ****** 2026-02-03 06:42:45.392867 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:42:45.392880 | orchestrator | 2026-02-03 06:42:45.392891 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-03 06:42:45.392904 | orchestrator | Tuesday 03 February 2026 06:42:09 +0000 (0:00:01.195) 0:47:22.663 ****** 2026-02-03 06:42:45.392917 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:42:45.392929 | orchestrator | 2026-02-03 06:42:45.392942 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-03 06:42:45.392955 | orchestrator | Tuesday 03 February 2026 06:42:10 +0000 (0:00:01.291) 0:47:23.955 ****** 2026-02-03 06:42:45.392968 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:42:45.392980 | orchestrator | 2026-02-03 06:42:45.392992 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-03 06:42:45.393005 | orchestrator | Tuesday 03 February 2026 06:42:12 +0000 (0:00:01.268) 0:47:25.223 ****** 2026-02-03 06:42:45.393018 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:42:45.393030 | orchestrator | 2026-02-03 06:42:45.393043 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-03 06:42:45.393055 | orchestrator | Tuesday 03 February 2026 06:42:13 +0000 (0:00:01.275) 0:47:26.499 ****** 2026-02-03 06:42:45.393067 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-03 06:42:45.393079 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-03 06:42:45.393092 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)
2026-02-03 06:42:45.393105 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:42:45.393118 | orchestrator |
2026-02-03 06:42:45.393130 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-03 06:42:45.393142 | orchestrator | Tuesday 03 February 2026 06:42:14 +0000 (0:00:01.560) 0:47:28.059 ******
2026-02-03 06:42:45.393154 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-03 06:42:45.393165 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-03 06:42:45.393177 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-03 06:42:45.393203 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:42:45.393217 | orchestrator |
2026-02-03 06:42:45.393230 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-03 06:42:45.393243 | orchestrator | Tuesday 03 February 2026 06:42:16 +0000 (0:00:01.550) 0:47:29.610 ******
2026-02-03 06:42:45.393257 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-03 06:42:45.393271 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-03 06:42:45.393285 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-03 06:42:45.393297 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:42:45.393309 | orchestrator |
2026-02-03 06:42:45.393320 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-03 06:42:45.393332 | orchestrator | Tuesday 03 February 2026 06:42:17 +0000 (0:00:01.505) 0:47:31.115 ******
2026-02-03 06:42:45.393345 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:42:45.393357 | orchestrator |
2026-02-03 06:42:45.393369 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-03 06:42:45.393381 | orchestrator | Tuesday 03 February 2026 06:42:19 +0000 (0:00:01.223) 0:47:32.339 ******
2026-02-03 06:42:45.393393 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-03 06:42:45.393405 | orchestrator |
2026-02-03 06:42:45.393416 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-03 06:42:45.393429 | orchestrator | Tuesday 03 February 2026 06:42:21 +0000 (0:00:02.016) 0:47:34.355 ******
2026-02-03 06:42:45.393467 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-03 06:42:45.393483 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-03 06:42:45.393495 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-03 06:42:45.393508 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-03 06:42:45.393521 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-03 06:42:45.393534 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-03 06:42:45.393547 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-03 06:42:45.393559 | orchestrator |
2026-02-03 06:42:45.393571 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-03 06:42:45.393583 | orchestrator | Tuesday 03 February 2026 06:42:23 +0000 (0:00:02.610) 0:47:36.966 ******
2026-02-03 06:42:45.393595 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-03 06:42:45.393607 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-03 06:42:45.393618 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-03 06:42:45.393631 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-03 06:42:45.393644 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-03 06:42:45.393658 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-03 06:42:45.393670 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-03 06:42:45.393684 | orchestrator |
2026-02-03 06:42:45.393697 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-02-03 06:42:45.393720 | orchestrator | Tuesday 03 February 2026 06:42:26 +0000 (0:00:02.538) 0:47:39.504 ******
2026-02-03 06:42:45.393734 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:42:45.393748 | orchestrator |
2026-02-03 06:42:45.393763 | orchestrator | TASK [Set num_osds] ************************************************************
2026-02-03 06:42:45.393810 | orchestrator | Tuesday 03 February 2026 06:42:27 +0000 (0:00:01.190) 0:47:40.694 ******
2026-02-03 06:42:45.393826 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:42:45.393839 | orchestrator |
2026-02-03 06:42:45.393851 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-02-03 06:42:45.393877 | orchestrator | Tuesday 03 February 2026 06:42:28 +0000 (0:00:00.917) 0:47:41.548 ******
2026-02-03 06:42:45.393890 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:42:45.393903 | orchestrator |
2026-02-03 06:42:45.393916 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-02-03 06:42:45.393929 | orchestrator | Tuesday 03 February 2026 06:42:29 +0000 (0:00:00.917) 0:47:42.465 ******
2026-02-03 06:42:45.393942 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-02-03 06:42:45.393955 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-02-03 06:42:45.393970 | orchestrator |
2026-02-03 06:42:45.393982 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-03 06:42:45.393994 | orchestrator | Tuesday 03 February 2026 06:42:33 +0000 (0:00:03.916) 0:47:46.382 ******
2026-02-03 06:42:45.394005 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5
2026-02-03 06:42:45.394067 | orchestrator |
2026-02-03 06:42:45.394080 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-03 06:42:45.394091 | orchestrator | Tuesday 03 February 2026 06:42:34 +0000 (0:00:01.199) 0:47:47.581 ******
2026-02-03 06:42:45.394101 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5
2026-02-03 06:42:45.394112 | orchestrator |
2026-02-03 06:42:45.394123 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-03 06:42:45.394133 | orchestrator | Tuesday 03 February 2026 06:42:35 +0000 (0:00:01.266) 0:47:48.847 ******
2026-02-03 06:42:45.394144 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:42:45.394154 | orchestrator |
2026-02-03 06:42:45.394165 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-03 06:42:45.394175 | orchestrator | Tuesday 03 February 2026 06:42:36 +0000 (0:00:01.243) 0:47:50.091 ******
2026-02-03 06:42:45.394186 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:42:45.394196 | orchestrator |
2026-02-03 06:42:45.394206 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-03 06:42:45.394216 | orchestrator | Tuesday 03 February 2026 06:42:38 +0000 (0:00:01.557) 0:47:51.649 ******
2026-02-03 06:42:45.394226 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:42:45.394237 | orchestrator |
2026-02-03 06:42:45.394248 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-03 06:42:45.394259 | orchestrator | Tuesday 03 February 2026 06:42:40 +0000 (0:00:01.646) 0:47:53.295 ******
2026-02-03 06:42:45.394270 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:42:45.394280 | orchestrator |
2026-02-03 06:42:45.394292 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-03 06:42:45.394303 | orchestrator | Tuesday 03 February 2026 06:42:41 +0000 (0:00:01.615) 0:47:54.910 ******
2026-02-03 06:42:45.394314 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:42:45.394324 | orchestrator |
2026-02-03 06:42:45.394334 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-03 06:42:45.394346 | orchestrator | Tuesday 03 February 2026 06:42:42 +0000 (0:00:01.249) 0:47:56.160 ******
2026-02-03 06:42:45.394356 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:42:45.394367 | orchestrator |
2026-02-03 06:42:45.394378 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-03 06:42:45.394388 | orchestrator | Tuesday 03 February 2026 06:42:44 +0000 (0:00:01.184) 0:47:57.344 ******
2026-02-03 06:42:45.394400 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:42:45.394410 | orchestrator |
2026-02-03 06:42:45.394437 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-03 06:43:27.919886 | orchestrator | Tuesday 03 February 2026 06:42:45 +0000 (0:00:01.224) 0:47:58.569 ******
2026-02-03 06:43:27.920003 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:43:27.920021 | orchestrator |
2026-02-03 06:43:27.920035 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-03 06:43:27.920046 | orchestrator | Tuesday 03 February 2026 06:42:47 +0000 (0:00:01.663) 0:48:00.232 ******
2026-02-03 06:43:27.920086 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:43:27.920098 | orchestrator |
2026-02-03 06:43:27.920110 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-03 06:43:27.920120 | orchestrator | Tuesday 03 February 2026 06:42:48 +0000 (0:00:01.681) 0:48:01.914 ******
2026-02-03 06:43:27.920131 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:43:27.920143 | orchestrator |
2026-02-03 06:43:27.920153 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-03 06:43:27.920164 | orchestrator | Tuesday 03 February 2026 06:42:49 +0000 (0:00:00.804) 0:48:02.718 ******
2026-02-03 06:43:27.920175 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:43:27.920186 | orchestrator |
2026-02-03 06:43:27.920196 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-03 06:43:27.920207 | orchestrator | Tuesday 03 February 2026 06:42:50 +0000 (0:00:00.811) 0:48:03.530 ******
2026-02-03 06:43:27.920218 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:43:27.920229 | orchestrator |
2026-02-03 06:43:27.920239 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-03 06:43:27.920250 | orchestrator | Tuesday 03 February 2026 06:42:51 +0000 (0:00:00.861) 0:48:04.391 ******
2026-02-03 06:43:27.920260 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:43:27.920271 | orchestrator |
2026-02-03 06:43:27.920282 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-03 06:43:27.920293 | orchestrator | Tuesday 03 February 2026 06:42:52 +0000 (0:00:00.813) 0:48:05.205 ******
2026-02-03 06:43:27.920303 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:43:27.920314 | orchestrator |
2026-02-03 06:43:27.920341 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-03 06:43:27.920354 | orchestrator | Tuesday 03 February 2026 06:42:52 +0000 (0:00:00.819) 0:48:06.024 ******
2026-02-03 06:43:27.920367 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:43:27.920379 | orchestrator |
2026-02-03 06:43:27.920393 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-03 06:43:27.920408 | orchestrator | Tuesday 03 February 2026 06:42:53 +0000 (0:00:00.818) 0:48:06.843 ******
2026-02-03 06:43:27.920420 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:43:27.920432 | orchestrator |
2026-02-03 06:43:27.920444 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-03 06:43:27.920457 | orchestrator | Tuesday 03 February 2026 06:42:54 +0000 (0:00:00.921) 0:48:07.764 ******
2026-02-03 06:43:27.920470 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:43:27.920483 | orchestrator |
2026-02-03 06:43:27.920495 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-03 06:43:27.920507 | orchestrator | Tuesday 03 February 2026 06:42:55 +0000 (0:00:00.950) 0:48:08.715 ******
2026-02-03 06:43:27.920520 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:43:27.920532 | orchestrator |
2026-02-03 06:43:27.920544 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-03 06:43:27.920557 | orchestrator | Tuesday 03 February 2026 06:42:56 +0000 (0:00:00.871) 0:48:09.586 ******
2026-02-03 06:43:27.920569 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:43:27.920582 | orchestrator |
2026-02-03 06:43:27.920595 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-03 06:43:27.920607 | orchestrator | Tuesday 03 February 2026 06:42:57 +0000 (0:00:00.874) 0:48:10.428 ******
2026-02-03 06:43:27.920620 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:43:27.920632 | orchestrator |
2026-02-03 06:43:27.920644 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-03 06:43:27.920657 | orchestrator | Tuesday 03 February 2026 06:42:58 +0000 (0:00:00.874) 0:48:11.302 ******
2026-02-03 06:43:27.920670 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:43:27.920682 | orchestrator |
2026-02-03 06:43:27.920694 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-03 06:43:27.920706 | orchestrator | Tuesday 03 February 2026 06:42:58 +0000 (0:00:00.824) 0:48:12.126 ******
2026-02-03 06:43:27.920724 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:43:27.920735 | orchestrator |
2026-02-03 06:43:27.920745 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-03 06:43:27.920756 | orchestrator | Tuesday 03 February 2026 06:42:59 +0000 (0:00:00.796) 0:48:12.922 ******
2026-02-03 06:43:27.920767 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:43:27.920778 | orchestrator |
2026-02-03 06:43:27.920814 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-03 06:43:27.920834 | orchestrator | Tuesday 03 February 2026 06:43:00 +0000 (0:00:00.838) 0:48:13.761 ******
2026-02-03 06:43:27.920854 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:43:27.920872 | orchestrator |
2026-02-03 06:43:27.920889 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-03 06:43:27.920900 | orchestrator | Tuesday 03 February 2026 06:43:01 +0000 (0:00:00.810) 0:48:14.572 ******
2026-02-03 06:43:27.920911 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:43:27.920921 | orchestrator |
2026-02-03 06:43:27.920932 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-03 06:43:27.920943 | orchestrator | Tuesday 03 February 2026 06:43:02 +0000 (0:00:00.811) 0:48:15.384 ******
2026-02-03 06:43:27.920953 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:43:27.920964 | orchestrator |
2026-02-03 06:43:27.920975 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-03 06:43:27.920986 | orchestrator | Tuesday 03 February 2026 06:43:03 +0000 (0:00:00.870) 0:48:16.255 ******
2026-02-03 06:43:27.920997 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:43:27.921008 | orchestrator |
2026-02-03 06:43:27.921018 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-03 06:43:27.921029 | orchestrator | Tuesday 03 February 2026 06:43:03 +0000 (0:00:00.865) 0:48:17.120 ******
2026-02-03 06:43:27.921059 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:43:27.921071 | orchestrator |
2026-02-03 06:43:27.921082 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-03 06:43:27.921093 | orchestrator | Tuesday 03 February 2026 06:43:04 +0000 (0:00:00.918) 0:48:18.038 ******
2026-02-03 06:43:27.921103 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:43:27.921114 | orchestrator |
2026-02-03 06:43:27.921125 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-03 06:43:27.921135 | orchestrator | Tuesday 03 February 2026 06:43:05 +0000 (0:00:00.853) 0:48:18.892 ******
2026-02-03 06:43:27.921146 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:43:27.921157 | orchestrator |
2026-02-03 06:43:27.921167 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-03 06:43:27.921178 | orchestrator | Tuesday 03 February 2026 06:43:06 +0000 (0:00:00.909) 0:48:19.801 ******
2026-02-03 06:43:27.921189 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:43:27.921199 | orchestrator |
2026-02-03 06:43:27.921210 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-03 06:43:27.921221 | orchestrator | Tuesday 03 February 2026 06:43:07 +0000 (0:00:00.835) 0:48:20.637 ******
2026-02-03 06:43:27.921231 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:43:27.921242 | orchestrator |
2026-02-03 06:43:27.921252 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-03 06:43:27.921263 | orchestrator | Tuesday 03 February 2026 06:43:09 +0000 (0:00:01.672) 0:48:22.310 ******
2026-02-03 06:43:27.921274 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:43:27.921284 | orchestrator |
2026-02-03 06:43:27.921295 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-03 06:43:27.921305 | orchestrator | Tuesday 03 February 2026 06:43:11 +0000 (0:00:01.970) 0:48:24.280 ******
2026-02-03 06:43:27.921316 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-02-03 06:43:27.921328 | orchestrator |
2026-02-03 06:43:27.921346 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-03 06:43:27.921365 | orchestrator | Tuesday 03 February 2026 06:43:12 +0000 (0:00:01.247) 0:48:25.528 ******
2026-02-03 06:43:27.921376 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:43:27.921386 | orchestrator |
2026-02-03 06:43:27.921397 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-03 06:43:27.921408 | orchestrator | Tuesday 03 February 2026 06:43:13 +0000 (0:00:01.190) 0:48:26.719 ******
2026-02-03 06:43:27.921418 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:43:27.921429 | orchestrator |
2026-02-03 06:43:27.921440 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-03 06:43:27.921451 | orchestrator | Tuesday 03 February 2026 06:43:14 +0000 (0:00:01.231) 0:48:27.950 ******
2026-02-03 06:43:27.921462 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-03 06:43:27.921472 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-03 06:43:27.921483 | orchestrator |
2026-02-03 06:43:27.921494 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-03 06:43:27.921505 | orchestrator | Tuesday 03 February 2026 06:43:16 +0000 (0:00:01.892) 0:48:29.843 ******
2026-02-03 06:43:27.921515 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:43:27.921526 | orchestrator |
2026-02-03 06:43:27.921537 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-03 06:43:27.921547 | orchestrator | Tuesday 03 February 2026 06:43:18 +0000 (0:00:01.519) 0:48:31.363 ******
2026-02-03 06:43:27.921558 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:43:27.921569 | orchestrator |
2026-02-03 06:43:27.921579 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-03 06:43:27.921590 | orchestrator | Tuesday 03 February 2026 06:43:19 +0000 (0:00:01.213) 0:48:32.576 ******
2026-02-03 06:43:27.921600 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:43:27.921611 | orchestrator |
2026-02-03 06:43:27.921622 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-03 06:43:27.921632 | orchestrator | Tuesday 03 February 2026 06:43:20 +0000 (0:00:00.970) 0:48:33.546 ******
2026-02-03 06:43:27.921643 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:43:27.921654 | orchestrator |
2026-02-03 06:43:27.921664 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-03 06:43:27.921675 | orchestrator | Tuesday 03 February 2026 06:43:21 +0000 (0:00:00.838) 0:48:34.385 ******
2026-02-03 06:43:27.921686 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-02-03 06:43:27.921696 | orchestrator |
2026-02-03 06:43:27.921707 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-03 06:43:27.921718 | orchestrator | Tuesday 03 February 2026 06:43:22 +0000 (0:00:01.212) 0:48:35.597 ******
2026-02-03 06:43:27.921728 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:43:27.921739 | orchestrator |
2026-02-03 06:43:27.921749 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-03 06:43:27.921760 | orchestrator | Tuesday 03 February 2026 06:43:24 +0000 (0:00:01.774) 0:48:37.371 ******
2026-02-03 06:43:27.921771 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-03 06:43:27.921782 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-03 06:43:27.921833 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-03 06:43:27.921845 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:43:27.921855 | orchestrator |
2026-02-03 06:43:27.921866 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-03 06:43:27.921877 | orchestrator | Tuesday 03 February 2026 06:43:25 +0000 (0:00:01.258) 0:48:38.630 ******
2026-02-03 06:43:27.921887 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:43:27.921897 | orchestrator |
2026-02-03 06:43:27.921908 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-03 06:43:27.921927 | orchestrator | Tuesday 03 February 2026 06:43:26 +0000 (0:00:01.231) 0:48:39.861 ******
2026-02-03 06:43:27.921945 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:44:13.201371 | orchestrator |
2026-02-03 06:44:13.201471 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-03 06:44:13.201484 | orchestrator | Tuesday 03 February 2026 06:43:27 +0000 (0:00:01.229) 0:48:41.091 ******
2026-02-03 06:44:13.201493 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:44:13.201502 | orchestrator |
2026-02-03 06:44:13.201510 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-03 06:44:13.201519 | orchestrator | Tuesday 03 February 2026 06:43:29 +0000 (0:00:01.257) 0:48:42.349 ******
2026-02-03 06:44:13.201527 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:44:13.201535 | orchestrator |
2026-02-03 06:44:13.201544 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-03 06:44:13.201552 | orchestrator | Tuesday 03 February 2026 06:43:30 +0000 (0:00:01.206) 0:48:43.555 ******
2026-02-03 06:44:13.201560 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:44:13.201568 | orchestrator |
2026-02-03 06:44:13.201588 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-03 06:44:13.201596 | orchestrator | Tuesday 03 February 2026 06:43:31 +0000 (0:00:00.877) 0:48:44.433 ******
2026-02-03 06:44:13.201604 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:44:13.201621 | orchestrator |
2026-02-03 06:44:13.201629 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-03 06:44:13.201638 | orchestrator | Tuesday 03 February 2026 06:43:33 +0000 (0:00:02.192) 0:48:46.626 ******
2026-02-03 06:44:13.201646 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:44:13.201653 | orchestrator |
2026-02-03 06:44:13.201661 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-03 06:44:13.201669 | orchestrator | Tuesday 03 February 2026 06:43:34 +0000 (0:00:00.845) 0:48:47.471 ******
2026-02-03 06:44:13.201677 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-02-03 06:44:13.201685 | orchestrator |
2026-02-03 06:44:13.201707 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-03 06:44:13.201716 | orchestrator | Tuesday 03 February 2026 06:43:35 +0000 (0:00:01.400) 0:48:48.872 ******
2026-02-03 06:44:13.201724 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:44:13.201732 | orchestrator |
2026-02-03 06:44:13.201740 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-03 06:44:13.201747 | orchestrator | Tuesday 03 February 2026 06:43:36 +0000 (0:00:01.193) 0:48:50.066 ******
2026-02-03 06:44:13.201755 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:44:13.201763 | orchestrator |
2026-02-03 06:44:13.201771 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-03 06:44:13.201779 | orchestrator | Tuesday 03 February 2026 06:43:38 +0000 (0:00:01.257) 0:48:51.323 ******
2026-02-03 06:44:13.201787 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:44:13.201795 | orchestrator |
2026-02-03 06:44:13.201803 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-03 06:44:13.201844 | orchestrator | Tuesday 03 February 2026 06:43:39 +0000 (0:00:01.176) 0:48:52.499 ******
2026-02-03 06:44:13.201858 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:44:13.201872 | orchestrator |
2026-02-03 06:44:13.201887 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-03 06:44:13.201895 | orchestrator | Tuesday 03 February 2026 06:43:40 +0000 (0:00:01.210) 0:48:53.710 ******
2026-02-03 06:44:13.201903 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:44:13.201914 | orchestrator |
2026-02-03 06:44:13.201923 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-03 06:44:13.201932 | orchestrator | Tuesday 03 February 2026 06:43:41 +0000 (0:00:01.265) 0:48:54.975 ******
2026-02-03 06:44:13.201941 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:44:13.201950 | orchestrator |
2026-02-03 06:44:13.201959 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-03 06:44:13.201991 | orchestrator | Tuesday 03 February 2026 06:43:42 +0000 (0:00:01.152) 0:48:56.127 ******
2026-02-03 06:44:13.202001 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:44:13.202009 | orchestrator |
2026-02-03 06:44:13.202064 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-03 06:44:13.202073 | orchestrator | Tuesday 03 February 2026 06:43:44 +0000 (0:00:01.277) 0:48:57.405 ******
2026-02-03 06:44:13.202083 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:44:13.202092 | orchestrator |
2026-02-03 06:44:13.202101 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-03 06:44:13.202110 | orchestrator | Tuesday 03 February 2026 06:43:45 +0000 (0:00:01.210) 0:48:58.615 ******
2026-02-03 06:44:13.202119 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:44:13.202128 | orchestrator |
2026-02-03 06:44:13.202137 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-03 06:44:13.202146 | orchestrator | Tuesday 03 February 2026 06:43:46 +0000 (0:00:00.922) 0:48:59.537 ******
2026-02-03 06:44:13.202154 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-02-03 06:44:13.202164 | orchestrator |
2026-02-03 06:44:13.202173 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-03 06:44:13.202182 | orchestrator | Tuesday 03 February 2026 06:43:47 +0000 (0:00:01.302) 0:49:00.840 ******
2026-02-03 06:44:13.202191 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-02-03 06:44:13.202200 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-02-03 06:44:13.202209 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-02-03 06:44:13.202218 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-02-03 06:44:13.202227 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-02-03 06:44:13.202236 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-02-03 06:44:13.202244 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-02-03 06:44:13.202254 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-02-03 06:44:13.202264 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-03 06:44:13.202290 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-03 06:44:13.202298 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-03 06:44:13.202306 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-03 06:44:13.202314 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-03 06:44:13.202322 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-03 06:44:13.202330 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-02-03 06:44:13.202338 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-02-03 06:44:13.202346 | orchestrator |
2026-02-03 06:44:13.202354 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-03 06:44:13.202362 | orchestrator | Tuesday 03 February 2026 06:43:53 +0000 (0:00:06.303) 0:49:07.143 ******
2026-02-03 06:44:13.202370 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5
2026-02-03 06:44:13.202378 | orchestrator |
2026-02-03 06:44:13.202385 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-03 06:44:13.202393 | orchestrator | Tuesday 03 February 2026 06:43:55 +0000 (0:00:01.207) 0:49:08.350 ******
2026-02-03 06:44:13.202401 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-03 06:44:13.202410 | orchestrator |
2026-02-03 06:44:13.202418 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-03 06:44:13.202426 | orchestrator | Tuesday 03 February 2026 06:43:56 +0000 (0:00:01.596) 0:49:09.947 ******
2026-02-03 06:44:13.202434 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-03 06:44:13.202449 | orchestrator |
2026-02-03 06:44:13.202462 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-03 06:44:13.202470 | orchestrator | Tuesday 03 February 2026 06:43:58 +0000 (0:00:01.696) 0:49:11.644 ******
2026-02-03 06:44:13.202478 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:44:13.202486 | orchestrator |
2026-02-03 06:44:13.202494 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-03 06:44:13.202502 | orchestrator | Tuesday 03 February 2026 06:43:59 +0000 (0:00:00.857) 0:49:12.501 ******
2026-02-03 06:44:13.202510 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:44:13.202518 | orchestrator |
2026-02-03 06:44:13.202526 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-03 06:44:13.202534 | orchestrator | Tuesday 03 February 2026 06:44:00 +0000 (0:00:00.842) 0:49:13.344 ******
2026-02-03 06:44:13.202542 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:44:13.202550 | orchestrator |
2026-02-03 06:44:13.202557 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-03 06:44:13.202565 | orchestrator | Tuesday 03 February 2026 06:44:00 +0000 (0:00:00.838) 0:49:14.182 ******
2026-02-03 06:44:13.202573 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:44:13.202581 | orchestrator |
2026-02-03 06:44:13.202589 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-03 06:44:13.202597 | orchestrator | Tuesday 03 February 2026 06:44:01 +0000 (0:00:00.856) 0:49:15.038 ******
2026-02-03 06:44:13.202605 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:44:13.202613 | orchestrator |
2026-02-03 06:44:13.202621 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-03 06:44:13.202629 | orchestrator | Tuesday 03 February 2026 06:44:02 +0000 (0:00:00.970) 0:49:16.008 ******
2026-02-03 06:44:13.202637 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:44:13.202644 | orchestrator |
2026-02-03 06:44:13.202652 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-03 06:44:13.202660 | orchestrator | Tuesday 03 February 2026 06:44:03 +0000 (0:00:00.841) 0:49:16.850 ******
2026-02-03 06:44:13.202668 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:44:13.202676 | orchestrator |
2026-02-03 06:44:13.202684 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-03 06:44:13.202692 | orchestrator | Tuesday 03 February 2026 06:44:04 +0000 (0:00:00.859) 0:49:17.710 ******
2026-02-03 06:44:13.202700 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:44:13.202708 | orchestrator |
2026-02-03 06:44:13.202716 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-03 06:44:13.202724 | orchestrator | Tuesday 03 February 2026 06:44:05 +0000 (0:00:00.932) 0:49:18.643 ******
2026-02-03 06:44:13.202732 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:44:13.202740 | orchestrator |
2026-02-03 06:44:13.202748 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-03 06:44:13.202755 | orchestrator | Tuesday 03 February 2026 06:44:06 +0000 (0:00:00.800) 0:49:19.443 ******
2026-02-03 06:44:13.202763 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:44:13.202771 | orchestrator |
2026-02-03 06:44:13.202779 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-03 06:44:13.202787 | orchestrator | Tuesday 03 February 2026 06:44:07 +0000 (0:00:00.831) 0:49:20.274 ******
2026-02-03 06:44:13.202795 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:44:13.202803 | orchestrator |
2026-02-03 06:44:13.202838 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-03 06:44:13.202846 | orchestrator | Tuesday 03 February 2026 06:44:07 +0000 (0:00:00.899) 0:49:21.174 ******
2026-02-03 06:44:13.202854 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-02-03 06:44:13.202862 | orchestrator |
2026-02-03 06:44:13.202870 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-03 06:44:13.202884 | orchestrator | Tuesday 03 February 2026 06:44:12 +0000 (0:00:04.361) 0:49:25.536 ******
2026-02-03 06:44:13.202897 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-03 06:44:57.030179 | orchestrator |
2026-02-03 06:44:57.030296 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-03 06:44:57.030313 | orchestrator | Tuesday 03 February 2026 06:44:13 +0000 (0:00:00.840) 0:49:26.376 ******
2026-02-03 06:44:57.030343 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-03 06:44:57.030358 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-03 06:44:57.030370 | orchestrator |
2026-02-03 06:44:57.030382 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-03 06:44:57.030393 | orchestrator | Tuesday 03 February 2026 06:44:20 +0000 (0:00:07.777) 0:49:34.154 ******
2026-02-03 06:44:57.030404 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:44:57.030416 | orchestrator |
2026-02-03 06:44:57.030427 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-03 06:44:57.030438 | orchestrator | Tuesday 03 February 2026 06:44:21 +0000 (0:00:00.796) 0:49:34.951 ******
2026-02-03 06:44:57.030449 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:44:57.030460 | orchestrator |
2026-02-03 06:44:57.030488 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-03 06:44:57.030501 | orchestrator | Tuesday 03 February 2026 06:44:22 +0000 (0:00:00.853) 0:49:35.804 ******
2026-02-03 06:44:57.030512 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:44:57.030522 | orchestrator |
2026-02-03 06:44:57.030533 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-03 06:44:57.030544 | orchestrator | Tuesday 03 February 2026 06:44:23 +0000 (0:00:00.848) 0:49:36.653 ******
2026-02-03 06:44:57.030555 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:44:57.030566 | orchestrator |
2026-02-03 06:44:57.030577 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-03 06:44:57.030588 | orchestrator | Tuesday 03 February 2026 06:44:24 +0000 (0:00:00.826) 0:49:37.480 ******
2026-02-03 06:44:57.030599 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:44:57.030610 | orchestrator |
2026-02-03 06:44:57.030621 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-03 06:44:57.030632 | orchestrator | Tuesday 03 February 2026 06:44:25 +0000 (0:00:00.820) 0:49:38.301 ******
2026-02-03 06:44:57.030643 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:44:57.030655 | orchestrator |
2026-02-03 06:44:57.030668 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-03 06:44:57.030680 | orchestrator | Tuesday 03 February 2026 06:44:26 +0000 (0:00:01.014) 0:49:39.315 ******
2026-02-03 06:44:57.030694 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-03 06:44:57.030707 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-03 06:44:57.030720 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-03 06:44:57.030733 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:44:57.030745 | orchestrator |
2026-02-03 06:44:57.030758 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-03 06:44:57.030770 | orchestrator | Tuesday 03 February 2026 06:44:27 +0000 (0:00:01.641) 0:49:40.956 ******
2026-02-03 06:44:57.030838 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-03 06:44:57.030853 |
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-03 06:44:57.030866 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-03 06:44:57.030878 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:44:57.030889 | orchestrator | 2026-02-03 06:44:57.030900 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-03 06:44:57.030911 | orchestrator | Tuesday 03 February 2026 06:44:28 +0000 (0:00:01.152) 0:49:42.109 ****** 2026-02-03 06:44:57.030922 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-03 06:44:57.030933 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-03 06:44:57.030944 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-03 06:44:57.030955 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:44:57.030966 | orchestrator | 2026-02-03 06:44:57.030976 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-03 06:44:57.030988 | orchestrator | Tuesday 03 February 2026 06:44:30 +0000 (0:00:01.142) 0:49:43.252 ****** 2026-02-03 06:44:57.030999 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:44:57.031010 | orchestrator | 2026-02-03 06:44:57.031021 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-03 06:44:57.031031 | orchestrator | Tuesday 03 February 2026 06:44:30 +0000 (0:00:00.885) 0:49:44.137 ****** 2026-02-03 06:44:57.031042 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-03 06:44:57.031053 | orchestrator | 2026-02-03 06:44:57.031064 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-03 06:44:57.031076 | orchestrator | Tuesday 03 February 2026 06:44:32 +0000 (0:00:01.121) 0:49:45.259 ****** 2026-02-03 06:44:57.031086 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:44:57.031098 | orchestrator | 
2026-02-03 06:44:57.031109 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-03 06:44:57.031120 | orchestrator | Tuesday 03 February 2026 06:44:33 +0000 (0:00:01.455) 0:49:46.715 ****** 2026-02-03 06:44:57.031131 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:44:57.031142 | orchestrator | 2026-02-03 06:44:57.031169 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-03 06:44:57.031181 | orchestrator | Tuesday 03 February 2026 06:44:34 +0000 (0:00:00.887) 0:49:47.603 ****** 2026-02-03 06:44:57.031192 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:44:57.031204 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:44:57.031214 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:44:57.031225 | orchestrator | 2026-02-03 06:44:57.031236 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-03 06:44:57.031247 | orchestrator | Tuesday 03 February 2026 06:44:36 +0000 (0:00:01.861) 0:49:49.465 ****** 2026-02-03 06:44:57.031258 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-5 2026-02-03 06:44:57.031269 | orchestrator | 2026-02-03 06:44:57.031280 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-03 06:44:57.031290 | orchestrator | Tuesday 03 February 2026 06:44:37 +0000 (0:00:01.148) 0:49:50.614 ****** 2026-02-03 06:44:57.031301 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:44:57.031312 | orchestrator | 2026-02-03 06:44:57.031323 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-03 06:44:57.031334 | orchestrator | Tuesday 03 February 2026 06:44:38 +0000 (0:00:01.152) 
0:49:51.767 ****** 2026-02-03 06:44:57.031344 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:44:57.031355 | orchestrator | 2026-02-03 06:44:57.031366 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-03 06:44:57.031377 | orchestrator | Tuesday 03 February 2026 06:44:39 +0000 (0:00:01.272) 0:49:53.039 ****** 2026-02-03 06:44:57.031396 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:44:57.031407 | orchestrator | 2026-02-03 06:44:57.031423 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-03 06:44:57.031435 | orchestrator | Tuesday 03 February 2026 06:44:41 +0000 (0:00:01.603) 0:49:54.643 ****** 2026-02-03 06:44:57.031446 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:44:57.031457 | orchestrator | 2026-02-03 06:44:57.031467 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-03 06:44:57.031478 | orchestrator | Tuesday 03 February 2026 06:44:42 +0000 (0:00:01.193) 0:49:55.837 ****** 2026-02-03 06:44:57.031489 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-03 06:44:57.031501 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-03 06:44:57.031511 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-03 06:44:57.031522 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-03 06:44:57.031533 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-03 06:44:57.031544 | orchestrator | 2026-02-03 06:44:57.031555 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-02-03 06:44:57.031565 | orchestrator | Tuesday 03 February 2026 06:44:45 +0000 (0:00:02.609) 0:49:58.446 ****** 2026-02-03 
06:44:57.031576 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:44:57.031587 | orchestrator | 2026-02-03 06:44:57.031598 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-03 06:44:57.031609 | orchestrator | Tuesday 03 February 2026 06:44:46 +0000 (0:00:00.816) 0:49:59.263 ****** 2026-02-03 06:44:57.031619 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-5 2026-02-03 06:44:57.031630 | orchestrator | 2026-02-03 06:44:57.031641 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-03 06:44:57.031652 | orchestrator | Tuesday 03 February 2026 06:44:47 +0000 (0:00:01.277) 0:50:00.540 ****** 2026-02-03 06:44:57.031663 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-03 06:44:57.031674 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-02-03 06:44:57.031684 | orchestrator | 2026-02-03 06:44:57.031695 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-03 06:44:57.031706 | orchestrator | Tuesday 03 February 2026 06:44:49 +0000 (0:00:01.984) 0:50:02.525 ****** 2026-02-03 06:44:57.031717 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 06:44:57.031728 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-03 06:44:57.031739 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-03 06:44:57.031750 | orchestrator | 2026-02-03 06:44:57.031761 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-03 06:44:57.031772 | orchestrator | Tuesday 03 February 2026 06:44:52 +0000 (0:00:03.381) 0:50:05.906 ****** 2026-02-03 06:44:57.031783 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-02-03 06:44:57.031794 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-03 
06:44:57.031805 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:44:57.031834 | orchestrator | 2026-02-03 06:44:57.031845 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-03 06:44:57.031856 | orchestrator | Tuesday 03 February 2026 06:44:54 +0000 (0:00:01.660) 0:50:07.567 ****** 2026-02-03 06:44:57.031867 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:44:57.031878 | orchestrator | 2026-02-03 06:44:57.031889 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-03 06:44:57.031900 | orchestrator | Tuesday 03 February 2026 06:44:55 +0000 (0:00:00.941) 0:50:08.508 ****** 2026-02-03 06:44:57.031911 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:44:57.031922 | orchestrator | 2026-02-03 06:44:57.031933 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-03 06:44:57.031951 | orchestrator | Tuesday 03 February 2026 06:44:56 +0000 (0:00:00.903) 0:50:09.412 ****** 2026-02-03 06:44:57.031962 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:44:57.031973 | orchestrator | 2026-02-03 06:44:57.031990 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-03 06:47:24.045716 | orchestrator | Tuesday 03 February 2026 06:44:57 +0000 (0:00:00.789) 0:50:10.201 ****** 2026-02-03 06:47:24.045907 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-5 2026-02-03 06:47:24.045932 | orchestrator | 2026-02-03 06:47:24.045952 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-03 06:47:24.045972 | orchestrator | Tuesday 03 February 2026 06:44:58 +0000 (0:00:01.558) 0:50:11.760 ****** 2026-02-03 06:47:24.045990 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:47:24.046012 | orchestrator | 2026-02-03 06:47:24.046117 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-02-03 06:47:24.046129 | orchestrator | Tuesday 03 February 2026 06:45:00 +0000 (0:00:01.560) 0:50:13.321 ****** 2026-02-03 06:47:24.046140 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:47:24.046152 | orchestrator | 2026-02-03 06:47:24.046163 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-03 06:47:24.046174 | orchestrator | Tuesday 03 February 2026 06:45:03 +0000 (0:00:03.535) 0:50:16.856 ****** 2026-02-03 06:47:24.046185 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-5 2026-02-03 06:47:24.046196 | orchestrator | 2026-02-03 06:47:24.046207 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-02-03 06:47:24.046218 | orchestrator | Tuesday 03 February 2026 06:45:04 +0000 (0:00:01.300) 0:50:18.157 ****** 2026-02-03 06:47:24.046232 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:47:24.046246 | orchestrator | 2026-02-03 06:47:24.046259 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-03 06:47:24.046278 | orchestrator | Tuesday 03 February 2026 06:45:07 +0000 (0:00:02.034) 0:50:20.191 ****** 2026-02-03 06:47:24.046298 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:47:24.046317 | orchestrator | 2026-02-03 06:47:24.046360 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-03 06:47:24.046377 | orchestrator | Tuesday 03 February 2026 06:45:09 +0000 (0:00:02.064) 0:50:22.256 ****** 2026-02-03 06:47:24.046390 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:47:24.046403 | orchestrator | 2026-02-03 06:47:24.046417 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-03 06:47:24.046434 | orchestrator | Tuesday 03 February 2026 06:45:11 +0000 (0:00:02.278) 0:50:24.534 ****** 2026-02-03 
06:47:24.046452 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:47:24.046474 | orchestrator | 2026-02-03 06:47:24.046493 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-02-03 06:47:24.046518 | orchestrator | Tuesday 03 February 2026 06:45:12 +0000 (0:00:01.229) 0:50:25.763 ****** 2026-02-03 06:47:24.046538 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:47:24.046558 | orchestrator | 2026-02-03 06:47:24.046577 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-03 06:47:24.046599 | orchestrator | Tuesday 03 February 2026 06:45:13 +0000 (0:00:01.218) 0:50:26.982 ****** 2026-02-03 06:47:24.046619 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-02-03 06:47:24.046632 | orchestrator | ok: [testbed-node-5] => (item=5) 2026-02-03 06:47:24.046649 | orchestrator | 2026-02-03 06:47:24.046667 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-03 06:47:24.046686 | orchestrator | Tuesday 03 February 2026 06:45:15 +0000 (0:00:01.904) 0:50:28.887 ****** 2026-02-03 06:47:24.046708 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-02-03 06:47:24.046721 | orchestrator | ok: [testbed-node-5] => (item=5) 2026-02-03 06:47:24.046740 | orchestrator | 2026-02-03 06:47:24.046760 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-02-03 06:47:24.046779 | orchestrator | Tuesday 03 February 2026 06:45:18 +0000 (0:00:02.962) 0:50:31.849 ****** 2026-02-03 06:47:24.046854 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-03 06:47:24.046873 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-02-03 06:47:24.046891 | orchestrator | 2026-02-03 06:47:24.046910 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-03 06:47:24.046929 | orchestrator | Tuesday 03 February 2026 06:45:23 +0000 (0:00:04.602) 
0:50:36.452 ****** 2026-02-03 06:47:24.046965 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:47:24.046998 | orchestrator | 2026-02-03 06:47:24.047019 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-03 06:47:24.047037 | orchestrator | Tuesday 03 February 2026 06:45:24 +0000 (0:00:01.509) 0:50:37.961 ****** 2026-02-03 06:47:24.047056 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-02-03 06:47:24.047069 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-03 06:47:24.047079 | orchestrator | 2026-02-03 06:47:24.047091 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-03 06:47:24.047101 | orchestrator | Tuesday 03 February 2026 06:45:37 +0000 (0:00:13.046) 0:50:51.008 ****** 2026-02-03 06:47:24.047112 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:47:24.047123 | orchestrator | 2026-02-03 06:47:24.047134 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-02-03 06:47:24.047145 | orchestrator | Tuesday 03 February 2026 06:45:38 +0000 (0:00:00.895) 0:50:51.904 ****** 2026-02-03 06:47:24.047156 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:47:24.047166 | orchestrator | 2026-02-03 06:47:24.047177 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-02-03 06:47:24.047188 | orchestrator | Tuesday 03 February 2026 06:45:39 +0000 (0:00:00.796) 0:50:52.701 ****** 2026-02-03 06:47:24.047199 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:47:24.047209 | orchestrator | 2026-02-03 06:47:24.047220 | orchestrator | TASK [Waiting for clean pgs...] 
************************************************ 2026-02-03 06:47:24.047231 | orchestrator | Tuesday 03 February 2026 06:45:40 +0000 (0:00:00.842) 0:50:53.543 ****** 2026-02-03 06:47:24.047242 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 2026-02-03 06:47:24.047253 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-03 06:47:24.047263 | orchestrator | 2026-02-03 06:47:24.047294 | orchestrator | PLAY [Complete osd upgrade] **************************************************** 2026-02-03 06:47:24.047306 | orchestrator | 2026-02-03 06:47:24.047317 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-03 06:47:24.047328 | orchestrator | Tuesday 03 February 2026 06:45:46 +0000 (0:00:05.729) 0:50:59.272 ****** 2026-02-03 06:47:24.047354 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:47:24.047365 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:47:24.047376 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:47:24.047387 | orchestrator | 2026-02-03 06:47:24.047398 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-03 06:47:24.047409 | orchestrator | Tuesday 03 February 2026 06:45:47 +0000 (0:00:01.732) 0:51:01.005 ****** 2026-02-03 06:47:24.047420 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:47:24.047431 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:47:24.047441 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:47:24.047452 | orchestrator | 2026-02-03 06:47:24.047463 | orchestrator | TASK [Re-enable pg autoscale on pools] ***************************************** 2026-02-03 06:47:24.047474 | orchestrator | Tuesday 03 February 2026 06:45:49 +0000 (0:00:01.812) 0:51:02.817 ****** 2026-02-03 06:47:24.047485 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'}) 2026-02-03 06:47:24.047496 | 
orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'}) 2026-02-03 06:47:24.047507 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'}) 2026-02-03 06:47:24.047529 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'}) 2026-02-03 06:47:24.047550 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'}) 2026-02-03 06:47:24.047561 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'}) 2026-02-03 06:47:24.047573 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'}) 2026-02-03 06:47:24.047584 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'}) 2026-02-03 06:47:24.047594 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'}) 2026-02-03 06:47:24.047605 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})  2026-02-03 06:47:24.047616 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})  2026-02-03 06:47:24.047627 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})  2026-02-03 06:47:24.047637 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-02-03 06:47:24.047648 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-02-03 06:47:24.047659 | orchestrator | 2026-02-03 06:47:24.047670 | orchestrator | TASK [Unset osd flags] ********************************************************* 2026-02-03 
06:47:24.047680 | orchestrator | Tuesday 03 February 2026 06:47:06 +0000 (0:01:16.781) 0:52:19.599 ****** 2026-02-03 06:47:24.047691 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-02-03 06:47:24.047702 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-02-03 06:47:24.047712 | orchestrator | 2026-02-03 06:47:24.047723 | orchestrator | TASK [Re-enable balancer] ****************************************************** 2026-02-03 06:47:24.047734 | orchestrator | Tuesday 03 February 2026 06:47:12 +0000 (0:00:05.717) 0:52:25.316 ****** 2026-02-03 06:47:24.047744 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-03 06:47:24.047755 | orchestrator | 2026-02-03 06:47:24.047766 | orchestrator | PLAY [Upgrade ceph mdss cluster, deactivate all rank > 0] ********************** 2026-02-03 06:47:24.047777 | orchestrator | 2026-02-03 06:47:24.047787 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-03 06:47:24.047833 | orchestrator | Tuesday 03 February 2026 06:47:16 +0000 (0:00:03.873) 0:52:29.189 ****** 2026-02-03 06:47:24.047847 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-02-03 06:47:24.047858 | orchestrator | 2026-02-03 06:47:24.047869 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-03 06:47:24.047879 | orchestrator | Tuesday 03 February 2026 06:47:17 +0000 (0:00:01.146) 0:52:30.336 ****** 2026-02-03 06:47:24.047890 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:47:24.047901 | orchestrator | 2026-02-03 06:47:24.047911 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-03 06:47:24.047922 | orchestrator | Tuesday 03 February 2026 06:47:18 +0000 (0:00:01.517) 0:52:31.854 ****** 2026-02-03 06:47:24.047933 | orchestrator | ok: 
[testbed-node-0] 2026-02-03 06:47:24.047943 | orchestrator | 2026-02-03 06:47:24.047954 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-03 06:47:24.047965 | orchestrator | Tuesday 03 February 2026 06:47:20 +0000 (0:00:01.440) 0:52:33.294 ****** 2026-02-03 06:47:24.047976 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:47:24.047986 | orchestrator | 2026-02-03 06:47:24.047997 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-03 06:47:24.048007 | orchestrator | Tuesday 03 February 2026 06:47:21 +0000 (0:00:01.547) 0:52:34.842 ****** 2026-02-03 06:47:24.048018 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:47:24.048036 | orchestrator | 2026-02-03 06:47:24.048047 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-03 06:47:24.048058 | orchestrator | Tuesday 03 February 2026 06:47:22 +0000 (0:00:01.185) 0:52:36.028 ****** 2026-02-03 06:47:24.048077 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:47:51.000444 | orchestrator | 2026-02-03 06:47:51.000553 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-03 06:47:51.000570 | orchestrator | Tuesday 03 February 2026 06:47:24 +0000 (0:00:01.195) 0:52:37.223 ****** 2026-02-03 06:47:51.000581 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:47:51.000593 | orchestrator | 2026-02-03 06:47:51.000603 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-03 06:47:51.000614 | orchestrator | Tuesday 03 February 2026 06:47:25 +0000 (0:00:01.279) 0:52:38.502 ****** 2026-02-03 06:47:51.000622 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:47:51.000632 | orchestrator | 2026-02-03 06:47:51.000640 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-03 06:47:51.000648 | orchestrator | 
Tuesday 03 February 2026 06:47:26 +0000 (0:00:01.277) 0:52:39.780 ****** 2026-02-03 06:47:51.000656 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:47:51.000664 | orchestrator | 2026-02-03 06:47:51.000672 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-03 06:47:51.000680 | orchestrator | Tuesday 03 February 2026 06:47:27 +0000 (0:00:01.206) 0:52:40.987 ****** 2026-02-03 06:47:51.000688 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-03 06:47:51.000696 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:47:51.000704 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:47:51.000712 | orchestrator | 2026-02-03 06:47:51.000720 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-03 06:47:51.000728 | orchestrator | Tuesday 03 February 2026 06:47:29 +0000 (0:00:01.915) 0:52:42.903 ****** 2026-02-03 06:47:51.000736 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:47:51.000744 | orchestrator | 2026-02-03 06:47:51.000767 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-03 06:47:51.000776 | orchestrator | Tuesday 03 February 2026 06:47:31 +0000 (0:00:01.376) 0:52:44.279 ****** 2026-02-03 06:47:51.000784 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-03 06:47:51.000792 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:47:51.000850 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:47:51.000865 | orchestrator | 2026-02-03 06:47:51.000879 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-03 06:47:51.000893 | orchestrator | Tuesday 03 February 2026 06:47:34 +0000 (0:00:03.553) 
0:52:47.833 ****** 2026-02-03 06:47:51.000906 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-03 06:47:51.000920 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-03 06:47:51.000933 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-03 06:47:51.000947 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:47:51.000962 | orchestrator | 2026-02-03 06:47:51.000978 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-03 06:47:51.000993 | orchestrator | Tuesday 03 February 2026 06:47:36 +0000 (0:00:01.610) 0:52:49.444 ****** 2026-02-03 06:47:51.001009 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-03 06:47:51.001026 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-03 06:47:51.001071 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-03 06:47:51.001088 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:47:51.001102 | orchestrator | 2026-02-03 06:47:51.001117 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-03 06:47:51.001132 | orchestrator | Tuesday 03 February 2026 06:47:38 +0000 (0:00:02.166) 0:52:51.610 ****** 2026-02-03 06:47:51.001149 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 06:47:51.001167 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 06:47:51.001204 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 06:47:51.001220 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:47:51.001233 | orchestrator | 2026-02-03 06:47:51.001248 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-03 06:47:51.001263 | orchestrator | Tuesday 03 February 2026 06:47:39 +0000 (0:00:01.251) 0:52:52.862 ****** 2026-02-03 06:47:51.001279 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'fc9af7e241e8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-03 06:47:31.700996', 'end': '2026-02-03 06:47:31.743874', 'delta': '0:00:00.042878', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter 
name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fc9af7e241e8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-03 06:47:51.001305 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'a8f198eef309', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-03 06:47:32.293765', 'end': '2026-02-03 06:47:32.348840', 'delta': '0:00:00.055075', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a8f198eef309'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-03 06:47:51.001320 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '79d18794d8bb', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-03 06:47:33.350383', 'end': '2026-02-03 06:47:33.398928', 'delta': '0:00:00.048545', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['79d18794d8bb'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-03 06:47:51.001344 
| orchestrator |
2026-02-03 06:47:51.001359 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-03 06:47:51.001373 | orchestrator | Tuesday 03 February 2026 06:47:41 +0000 (0:00:01.451) 0:52:54.314 ******
2026-02-03 06:47:51.001389 | orchestrator | ok: [testbed-node-0]
2026-02-03 06:47:51.001403 | orchestrator |
2026-02-03 06:47:51.001417 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-03 06:47:51.001431 | orchestrator | Tuesday 03 February 2026 06:47:42 +0000 (0:00:01.351) 0:52:55.665 ******
2026-02-03 06:47:51.001445 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:47:51.001460 | orchestrator |
2026-02-03 06:47:51.001474 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-03 06:47:51.001488 | orchestrator | Tuesday 03 February 2026 06:47:43 +0000 (0:00:01.386) 0:52:57.051 ******
2026-02-03 06:47:51.001501 | orchestrator | ok: [testbed-node-0]
2026-02-03 06:47:51.001515 | orchestrator |
2026-02-03 06:47:51.001530 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-03 06:47:51.001545 | orchestrator | Tuesday 03 February 2026 06:47:45 +0000 (0:00:02.084) 0:52:58.291 ******
2026-02-03 06:47:51.001559 | orchestrator | ok: [testbed-node-0]
2026-02-03 06:47:51.001573 | orchestrator |
2026-02-03 06:47:51.001587 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-03 06:47:51.001601 | orchestrator | Tuesday 03 February 2026 06:47:47 +0000 (0:00:02.084) 0:53:00.375 ******
2026-02-03 06:47:51.001616 | orchestrator | ok: [testbed-node-0]
2026-02-03 06:47:51.001631 | orchestrator |
2026-02-03 06:47:51.001645 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-03 06:47:51.001660 | orchestrator | Tuesday 03 February 2026 06:47:48 +0000 (0:00:01.247) 0:53:01.623 ******
2026-02-03 06:47:51.001674 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:47:51.001688 | orchestrator |
2026-02-03 06:47:51.001701 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-03 06:47:51.001716 | orchestrator | Tuesday 03 February 2026 06:47:49 +0000 (0:00:01.170) 0:53:02.793 ******
2026-02-03 06:47:51.001738 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:48:02.630129 | orchestrator |
2026-02-03 06:48:02.630250 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-03 06:48:02.630270 | orchestrator | Tuesday 03 February 2026 06:47:50 +0000 (0:00:01.380) 0:53:04.174 ******
2026-02-03 06:48:02.630284 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:48:02.630297 | orchestrator |
2026-02-03 06:48:02.630308 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-03 06:48:02.630320 | orchestrator | Tuesday 03 February 2026 06:47:52 +0000 (0:00:01.201) 0:53:05.376 ******
2026-02-03 06:48:02.630333 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:48:02.630345 | orchestrator |
2026-02-03 06:48:02.630356 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-03 06:48:02.630369 | orchestrator | Tuesday 03 February 2026 06:47:53 +0000 (0:00:01.317) 0:53:06.694 ******
2026-02-03 06:48:02.630383 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:48:02.630396 | orchestrator |
2026-02-03 06:48:02.630409 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-03 06:48:02.630423 | orchestrator | Tuesday 03 February 2026 06:47:54 +0000 (0:00:01.234) 0:53:07.928 ******
2026-02-03 06:48:02.630437 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:48:02.630450 | orchestrator |
2026-02-03 06:48:02.630464 | orchestrator | TASK [ceph-facts :
Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-03 06:48:02.630507 | orchestrator | Tuesday 03 February 2026 06:47:56 +0000 (0:00:01.272) 0:53:09.201 ****** 2026-02-03 06:48:02.630518 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:48:02.630526 | orchestrator | 2026-02-03 06:48:02.630533 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-03 06:48:02.630541 | orchestrator | Tuesday 03 February 2026 06:47:57 +0000 (0:00:01.248) 0:53:10.449 ****** 2026-02-03 06:48:02.630548 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:48:02.630555 | orchestrator | 2026-02-03 06:48:02.630575 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-03 06:48:02.630584 | orchestrator | Tuesday 03 February 2026 06:47:58 +0000 (0:00:01.190) 0:53:11.640 ****** 2026-02-03 06:48:02.630591 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:48:02.630598 | orchestrator | 2026-02-03 06:48:02.630608 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-03 06:48:02.630616 | orchestrator | Tuesday 03 February 2026 06:47:59 +0000 (0:00:01.348) 0:53:12.989 ****** 2026-02-03 06:48:02.630627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:48:02.630640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:48:02.630650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:48:02.630661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-03 06:48:02.630673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:48:02.630700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': 
'0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:48:02.630710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:48:02.630734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8b2ebf21', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part16', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part14', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part15', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part15'], 
'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part1', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-03 06:48:02.630746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:48:02.630755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:48:02.630763 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:48:02.630772 | orchestrator | 2026-02-03 06:48:02.630781 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-03 06:48:02.630790 | orchestrator | Tuesday 03 
February 2026 06:48:01 +0000 (0:00:01.409) 0:53:14.399 ****** 2026-02-03 06:48:02.630843 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:48:07.028612 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:48:07.028755 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 
'ansible_loop_var': 'item'})  2026-02-03 06:48:07.028769 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:48:07.028778 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:48:07.028786 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:48:07.028793 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:48:07.028901 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8b2ebf21', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part16', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part14', 
'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part15', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part1', 'scsi-SQEMU_QEMU_HARDDISK_8b2ebf21-73a8-4948-82f3-6debf7de46ad-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:48:07.028912 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:48:07.028920 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:48:07.028927 | orchestrator | skipping: [testbed-node-0] 2026-02-03 06:48:07.028936 | orchestrator | 2026-02-03 06:48:07.028944 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-03 06:48:07.028959 | orchestrator | Tuesday 03 February 2026 06:48:02 +0000 (0:00:01.408) 0:53:15.808 ****** 2026-02-03 06:48:07.028966 | orchestrator | ok: [testbed-node-0] 2026-02-03 06:48:07.028975 | orchestrator | 2026-02-03 06:48:07.028982 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-03 06:48:07.028989 | orchestrator 
| Tuesday 03 February 2026 06:48:04 +0000 (0:00:01.575) 0:53:17.383 ******
2026-02-03 06:48:07.028996 | orchestrator | ok: [testbed-node-0]
2026-02-03 06:48:07.029002 | orchestrator |
2026-02-03 06:48:07.029009 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-03 06:48:07.029016 | orchestrator | Tuesday 03 February 2026 06:48:05 +0000 (0:00:01.259) 0:53:18.643 ******
2026-02-03 06:48:07.029023 | orchestrator | ok: [testbed-node-0]
2026-02-03 06:48:07.029030 | orchestrator |
2026-02-03 06:48:07.029037 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-03 06:48:07.029048 | orchestrator | Tuesday 03 February 2026 06:48:07 +0000 (0:00:01.561) 0:53:20.204 ******
2026-02-03 06:49:05.731989 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:49:05.732112 | orchestrator |
2026-02-03 06:49:05.732130 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-03 06:49:05.732143 | orchestrator | Tuesday 03 February 2026 06:48:08 +0000 (0:00:01.209) 0:53:21.414 ******
2026-02-03 06:49:05.732155 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:49:05.732166 | orchestrator |
2026-02-03 06:49:05.732177 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-03 06:49:05.732188 | orchestrator | Tuesday 03 February 2026 06:48:09 +0000 (0:00:01.274) 0:53:22.688 ******
2026-02-03 06:49:05.732199 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:49:05.732210 | orchestrator |
2026-02-03 06:49:05.732220 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-03 06:49:05.732231 | orchestrator | Tuesday 03 February 2026 06:48:10 +0000 (0:00:01.192) 0:53:23.881 ******
2026-02-03 06:49:05.732243 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-03 06:49:05.732254 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-03 06:49:05.732265 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-03 06:49:05.732276 | orchestrator |
2026-02-03 06:49:05.732287 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-03 06:49:05.732321 | orchestrator | Tuesday 03 February 2026 06:48:12 +0000 (0:00:02.152) 0:53:26.034 ******
2026-02-03 06:49:05.732338 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-03 06:49:05.732357 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-03 06:49:05.732375 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-03 06:49:05.732393 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:49:05.732410 | orchestrator |
2026-02-03 06:49:05.732428 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-03 06:49:05.732446 | orchestrator | Tuesday 03 February 2026 06:48:14 +0000 (0:00:01.243) 0:53:27.319 ******
2026-02-03 06:49:05.732464 | orchestrator | skipping: [testbed-node-0]
2026-02-03 06:49:05.732482 | orchestrator |
2026-02-03 06:49:05.732499 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-03 06:49:05.732520 | orchestrator | Tuesday 03 February 2026 06:48:15 +0000 (0:00:01.243) 0:53:28.563 ******
2026-02-03 06:49:05.732541 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-03 06:49:05.732563 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-03 06:49:05.732584 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-03 06:49:05.732606 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-03 06:49:05.732627 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-03 06:49:05.732648 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-03 06:49:05.732704 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-03 06:49:05.732726 | orchestrator |
2026-02-03 06:49:05.732747 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-03 06:49:05.732769 | orchestrator | Tuesday 03 February 2026 06:48:17 +0000 (0:00:02.594) 0:53:31.157 ******
2026-02-03 06:49:05.732790 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-03 06:49:05.732839 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-03 06:49:05.732863 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-03 06:49:05.732883 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-03 06:49:05.732900 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-03 06:49:05.732918 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-03 06:49:05.732936 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-03 06:49:05.732954 | orchestrator |
2026-02-03 06:49:05.732971 | orchestrator | TASK [Set max_mds 1 on ceph fs] ************************************************
2026-02-03 06:49:05.732990 | orchestrator | Tuesday 03 February 2026 06:48:21 +0000 (0:00:03.320) 0:53:34.477 ******
2026-02-03 06:49:05.733009 | orchestrator | ok: [testbed-node-0]
2026-02-03 06:49:05.733027 | orchestrator |
2026-02-03 06:49:05.733045 | orchestrator | TASK [Wait until only rank 0 is up] ********************************************
2026-02-03 06:49:05.733063 | orchestrator | Tuesday 03 February 2026 06:48:24 +0000 (0:00:03.246) 0:53:37.724 ******
2026-02-03 06:49:05.733082 | orchestrator | ok: [testbed-node-0]
2026-02-03 06:49:05.733099 | orchestrator |
2026-02-03 06:49:05.733117 | orchestrator | TASK [Get name of remaining active mds] ****************************************
2026-02-03 06:49:05.733135 | orchestrator | Tuesday 03 February 2026 06:48:27 +0000 (0:00:03.138) 0:53:40.862 ******
2026-02-03 06:49:05.733151 | orchestrator | ok: [testbed-node-0]
2026-02-03 06:49:05.733167 | orchestrator |
2026-02-03 06:49:05.733184 | orchestrator | TASK [Set_fact mds_active_name] ************************************************
2026-02-03 06:49:05.733201 | orchestrator | Tuesday 03 February 2026 06:48:29 +0000 (0:00:02.226) 0:53:43.089 ******
2026-02-03 06:49:05.733252 | orchestrator | ok: [testbed-node-0] => (item={'key': 'gid_4779', 'value': {'gid': 4779, 'name': 'testbed-node-3', 'rank': 0, 'incarnation': 4, 'state': 'up:active', 'state_seq': 2, 'addr': '192.168.16.13:6817/1536047636', 'addrs': {'addrvec': [{'type': 'v2', 'addr': '192.168.16.13:6816', 'nonce': 1536047636}, {'type': 'v1', 'addr': '192.168.16.13:6817', 'nonce': 1536047636}]}, 'join_fscid': -1, 'export_targets': [], 'features': 4540138322906710015, 'flags': 0, 'compat': {'compat': {}, 'ro_compat': {}, 'incompat': {'feature_1': 'base v0.20', 'feature_2': 'client writeable ranges', 'feature_3': 'default file layouts on dirs', 'feature_4': 'dir inode in separate object', 'feature_5': 'mds uses versioned encoding', 'feature_6': 'dirfrag is stored in omap', 'feature_7': 'mds uses inline data', 'feature_8': 'no anchor table', 'feature_9': 'file layout v2', 'feature_10': 'snaprealm v2'}}}})
2026-02-03 06:49:05.733274 | orchestrator |
2026-02-03 06:49:05.733291 | orchestrator | TASK [Set_fact mds_active_host] ************************************************
2026-02-03 06:49:05.733307 | orchestrator | Tuesday 03 February 2026 06:48:31 +0000 (0:00:01.275) 0:53:44.364 ******
2026-02-03 06:49:05.733325 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-3)
2026-02-03 06:49:05.733342 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-03 06:49:05.733358 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-03 06:49:05.733375 | orchestrator |
2026-02-03 06:49:05.733392 | orchestrator | TASK [Create standby_mdss group] ***********************************************
2026-02-03 06:49:05.733420 | orchestrator | Tuesday 03 February 2026 06:48:32 +0000 (0:00:01.693) 0:53:46.058 ******
2026-02-03 06:49:05.733436 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-4)
2026-02-03 06:49:05.733467 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-5)
2026-02-03 06:49:05.733484 | orchestrator |
2026-02-03 06:49:05.733502 | orchestrator | TASK [Stop standby ceph mds] ***************************************************
2026-02-03 06:49:05.733520 | orchestrator | Tuesday 03 February 2026 06:48:34 +0000 (0:00:01.572) 0:53:47.631 ******
2026-02-03 06:49:05.733538 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-03 06:49:05.733555 | orchestrator | changed: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-03 06:49:05.733572 | orchestrator |
2026-02-03 06:49:05.733590 | orchestrator | TASK [Mask systemd units for standby ceph mds] *********************************
2026-02-03 06:49:05.733607 | orchestrator | Tuesday 03 February 2026 06:48:45 +0000 (0:00:11.376) 0:53:59.008 ******
2026-02-03 06:49:05.733625 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-03 06:49:05.733643 | orchestrator | changed: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-03 06:49:05.733662 | orchestrator |
2026-02-03 06:49:05.733680 | orchestrator | TASK [Wait until all standbys mds are stopped] *********************************
2026-02-03 06:49:05.733698 | orchestrator | Tuesday 03 February 2026 06:48:49 +0000 (0:00:04.048) 0:54:03.056 ******
2026-02-03 06:49:05.733717 | orchestrator | ok: [testbed-node-0]
2026-02-03 06:49:05.733736 | orchestrator |
2026-02-03 06:49:05.733756 | orchestrator | TASK [Create active_mdss group] ************************************************
2026-02-03 06:49:05.733774 | orchestrator | Tuesday 03 February 2026 06:48:52 +0000 (0:00:02.300) 0:54:05.357 ******
2026-02-03 06:49:05.733820 | orchestrator | changed: [testbed-node-0]
2026-02-03 06:49:05.733834 | orchestrator |
2026-02-03 06:49:05.733845 | orchestrator | PLAY [Upgrade active mds] ******************************************************
2026-02-03 06:49:05.733856 | orchestrator |
2026-02-03 06:49:05.733866 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-03 06:49:05.733877 | orchestrator | Tuesday 03 February 2026 06:48:53 +0000 (0:00:01.653) 0:54:07.010 ******
2026-02-03 06:49:05.733888 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3
2026-02-03 06:49:05.733898 | orchestrator |
2026-02-03 06:49:05.733909 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-03 06:49:05.733920 | orchestrator | Tuesday 03 February 2026 06:48:55 +0000 (0:00:01.489) 0:54:08.499 ******
2026-02-03 06:49:05.733931 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:49:05.733942 | orchestrator |
2026-02-03 06:49:05.733953 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-03 06:49:05.733963 | orchestrator | Tuesday 03 February 2026 06:48:56 +0000 (0:00:01.531) 0:54:10.031 ******
2026-02-03 06:49:05.733974 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:49:05.733985 | orchestrator |
2026-02-03 06:49:05.733996 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-03 06:49:05.734006 | orchestrator | Tuesday 03 February 2026 06:48:58 +0000 (0:00:01.205) 0:54:11.237 ******
2026-02-03 06:49:05.734082 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:49:05.734094 | orchestrator |
2026-02-03 06:49:05.734104 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-03 06:49:05.734115 | orchestrator | Tuesday 03 February 2026 06:48:59 +0000 (0:00:01.560) 0:54:12.797 ******
2026-02-03 06:49:05.734126 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:49:05.734137 | orchestrator |
2026-02-03 06:49:05.734148 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-03 06:49:05.734158 | orchestrator | Tuesday 03 February 2026 06:49:00 +0000 (0:00:01.273) 0:54:14.071 ******
2026-02-03 06:49:05.734169 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:49:05.734180 | orchestrator |
2026-02-03 06:49:05.734190 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-03 06:49:05.734201 | orchestrator | Tuesday 03 February 2026 06:49:02 +0000 (0:00:01.237) 0:54:15.309 ******
2026-02-03 06:49:05.734224 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:49:05.734234 | orchestrator |
2026-02-03 06:49:05.734245 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-03 06:49:05.734256 | orchestrator | Tuesday 03 February 2026 06:49:03 +0000 (0:00:01.188) 0:54:16.497 ******
2026-02-03 06:49:05.734266 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:49:05.734277 | orchestrator |
2026-02-03 06:49:05.734288 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-03 06:49:05.734298 | orchestrator | Tuesday 03 February 2026 06:49:04 +0000 (0:00:01.167) 0:54:17.665 ******
2026-02-03 06:49:05.734309 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:49:05.734320 | orchestrator |
2026-02-03 06:49:05.734345 | orchestrator | TASK
[ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-03 06:49:32.592145 | orchestrator | Tuesday 03 February 2026 06:49:05 +0000 (0:00:01.240) 0:54:18.905 ****** 2026-02-03 06:49:32.592255 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:49:32.592275 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:49:32.592286 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:49:32.592298 | orchestrator | 2026-02-03 06:49:32.592310 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-03 06:49:32.592321 | orchestrator | Tuesday 03 February 2026 06:49:07 +0000 (0:00:02.206) 0:54:21.111 ****** 2026-02-03 06:49:32.592332 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:49:32.592345 | orchestrator | 2026-02-03 06:49:32.592356 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-03 06:49:32.592367 | orchestrator | Tuesday 03 February 2026 06:49:09 +0000 (0:00:01.390) 0:54:22.502 ****** 2026-02-03 06:49:32.592378 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:49:32.592408 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:49:32.592420 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:49:32.592431 | orchestrator | 2026-02-03 06:49:32.592443 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-03 06:49:32.592455 | orchestrator | Tuesday 03 February 2026 06:49:12 +0000 (0:00:03.416) 0:54:25.919 ****** 2026-02-03 06:49:32.592467 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-03 06:49:32.592480 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-03 06:49:32.592491 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-03 06:49:32.592502 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:49:32.592514 | orchestrator | 2026-02-03 06:49:32.592525 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-03 06:49:32.592536 | orchestrator | Tuesday 03 February 2026 06:49:14 +0000 (0:00:02.067) 0:54:27.986 ****** 2026-02-03 06:49:32.592550 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-03 06:49:32.592565 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-03 06:49:32.592578 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-03 06:49:32.592590 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:49:32.592602 | orchestrator | 2026-02-03 06:49:32.592614 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-03 06:49:32.592654 | orchestrator | Tuesday 03 February 2026 06:49:16 +0000 (0:00:01.844) 0:54:29.831 ****** 2026-02-03 06:49:32.592669 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 06:49:32.592684 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 06:49:32.592696 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 06:49:32.592708 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:49:32.592720 | orchestrator | 2026-02-03 06:49:32.592732 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-03 06:49:32.592744 | orchestrator | Tuesday 03 February 2026 06:49:17 +0000 (0:00:01.233) 0:54:31.065 ****** 2026-02-03 06:49:32.592781 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'fc9af7e241e8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-03 06:49:10.260155', 'end': '2026-02-03 06:49:10.306914', 'delta': '0:00:00.046759', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': 
None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fc9af7e241e8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-03 06:49:32.592837 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a8f198eef309', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-03 06:49:10.845080', 'end': '2026-02-03 06:49:10.905221', 'delta': '0:00:00.060141', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a8f198eef309'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-03 06:49:32.592853 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '79d18794d8bb', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-03 06:49:11.471797', 'end': '2026-02-03 06:49:11.521823', 'delta': '0:00:00.050026', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['79d18794d8bb'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-03 06:49:32.592879 | orchestrator | 2026-02-03 06:49:32.592893 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-03 06:49:32.592907 | 
orchestrator | Tuesday 03 February 2026 06:49:19 +0000 (0:00:01.259) 0:54:32.324 ****** 2026-02-03 06:49:32.592919 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:49:32.592934 | orchestrator | 2026-02-03 06:49:32.592946 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-03 06:49:32.592959 | orchestrator | Tuesday 03 February 2026 06:49:20 +0000 (0:00:01.396) 0:54:33.721 ****** 2026-02-03 06:49:32.592972 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:49:32.592986 | orchestrator | 2026-02-03 06:49:32.592999 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-03 06:49:32.593012 | orchestrator | Tuesday 03 February 2026 06:49:21 +0000 (0:00:01.304) 0:54:35.025 ****** 2026-02-03 06:49:32.593024 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:49:32.593037 | orchestrator | 2026-02-03 06:49:32.593051 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-03 06:49:32.593063 | orchestrator | Tuesday 03 February 2026 06:49:23 +0000 (0:00:01.210) 0:54:36.236 ****** 2026-02-03 06:49:32.593074 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-03 06:49:32.593087 | orchestrator | 2026-02-03 06:49:32.593099 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-03 06:49:32.593112 | orchestrator | Tuesday 03 February 2026 06:49:25 +0000 (0:00:02.060) 0:54:38.296 ****** 2026-02-03 06:49:32.593125 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:49:32.593137 | orchestrator | 2026-02-03 06:49:32.593150 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-03 06:49:32.593163 | orchestrator | Tuesday 03 February 2026 06:49:26 +0000 (0:00:01.228) 0:54:39.525 ****** 2026-02-03 06:49:32.593175 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:49:32.593188 | orchestrator | 
2026-02-03 06:49:32.593201 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-03 06:49:32.593215 | orchestrator | Tuesday 03 February 2026 06:49:27 +0000 (0:00:01.175) 0:54:40.701 ****** 2026-02-03 06:49:32.593228 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:49:32.593240 | orchestrator | 2026-02-03 06:49:32.593253 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-03 06:49:32.593266 | orchestrator | Tuesday 03 February 2026 06:49:28 +0000 (0:00:01.347) 0:54:42.048 ****** 2026-02-03 06:49:32.593279 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:49:32.593297 | orchestrator | 2026-02-03 06:49:32.593309 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-03 06:49:32.593322 | orchestrator | Tuesday 03 February 2026 06:49:30 +0000 (0:00:01.150) 0:54:43.199 ****** 2026-02-03 06:49:32.593336 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:49:32.593347 | orchestrator | 2026-02-03 06:49:32.593383 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-03 06:49:32.593399 | orchestrator | Tuesday 03 February 2026 06:49:31 +0000 (0:00:01.236) 0:54:44.436 ****** 2026-02-03 06:49:32.593439 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:49:37.814456 | orchestrator | 2026-02-03 06:49:37.814569 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-03 06:49:37.814581 | orchestrator | Tuesday 03 February 2026 06:49:32 +0000 (0:00:01.331) 0:54:45.767 ****** 2026-02-03 06:49:37.814588 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:49:37.814598 | orchestrator | 2026-02-03 06:49:37.814635 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-03 06:49:37.814644 | orchestrator | Tuesday 03 February 2026 06:49:33 +0000 
(0:00:01.165) 0:54:46.933 ****** 2026-02-03 06:49:37.814651 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:49:37.814659 | orchestrator | 2026-02-03 06:49:37.814665 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-03 06:49:37.814671 | orchestrator | Tuesday 03 February 2026 06:49:34 +0000 (0:00:01.190) 0:54:48.124 ****** 2026-02-03 06:49:37.814697 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:49:37.814704 | orchestrator | 2026-02-03 06:49:37.814710 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-03 06:49:37.814718 | orchestrator | Tuesday 03 February 2026 06:49:36 +0000 (0:00:01.264) 0:54:49.388 ****** 2026-02-03 06:49:37.814724 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:49:37.814743 | orchestrator | 2026-02-03 06:49:37.814750 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-03 06:49:37.814756 | orchestrator | Tuesday 03 February 2026 06:49:37 +0000 (0:00:01.247) 0:54:50.637 ****** 2026-02-03 06:49:37.814764 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:49:37.814774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--bafb60f3--a5a9--526b--adce--8ea58a9a19cd-osd--block--bafb60f3--a5a9--526b--adce--8ea58a9a19cd', 'dm-uuid-LVM-stKE3AAHbU7tUFxIQAJ72dtWy4EVot1jnVMQamLoChpHBSYL0cLNGgZFRZ56lw3T'], 'uuids': ['027247ae-00a3-443e-9633-8d8391a7da1a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': 
None, 'sas_address': None, 'sas_device_handle': None, 'serial': '8097be92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['nVMQam-LoCh-pHBS-YL0c-LNGg-ZFRZ-56lw3T']}})  2026-02-03 06:49:37.814784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_30942d1f-f704-43cc-bdd1-e5a5821a35c3', 'scsi-SQEMU_QEMU_HARDDISK_30942d1f-f704-43cc-bdd1-e5a5821a35c3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '30942d1f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-03 06:49:37.814820 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Xh8ZTx-AObI-x7Qe-6Flc-GeSw-194p-Pfmv8i', 'scsi-0QEMU_QEMU_HARDDISK_b4cf4752-8315-482c-8a5b-0aee9859091f', 'scsi-SQEMU_QEMU_HARDDISK_b4cf4752-8315-482c-8a5b-0aee9859091f'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b4cf4752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--85b6ff9c--bd3f--596f--9d81--0006b9d69e29-osd--block--85b6ff9c--bd3f--596f--9d81--0006b9d69e29']}})  2026-02-03 06:49:37.814828 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:49:37.814852 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:49:37.814875 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-03 06:49:37.814882 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:49:37.814889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Vax7AP-xd4A-5hvn-IJK2-L8WY-uJjg-ErTdLp', 'dm-uuid-CRYPT-LUKS2-51cdba44ba2f44e4a9ba680ba42622f2-Vax7AP-xd4A-5hvn-IJK2-L8WY-uJjg-ErTdLp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-03 06:49:37.814895 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:49:37.814902 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--85b6ff9c--bd3f--596f--9d81--0006b9d69e29-osd--block--85b6ff9c--bd3f--596f--9d81--0006b9d69e29', 'dm-uuid-LVM-eCnBPCzOsBAMg7ZG1zzxsebDLR9lBnAnVax7APxd4A5hvnIJK2L8WYuJjgErTdLp'], 'uuids': ['51cdba44-ba2f-44e4-a9ba-680ba42622f2'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b4cf4752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Vax7AP-xd4A-5hvn-IJK2-L8WY-uJjg-ErTdLp']}})  2026-02-03 06:49:37.814909 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-MNylkH-UFIw-FcM9-RNy8-22Oh-QCDT-pfyDSJ', 'scsi-0QEMU_QEMU_HARDDISK_8097be92-44ca-4be8-a1da-39ba5887696e', 'scsi-SQEMU_QEMU_HARDDISK_8097be92-44ca-4be8-a1da-39ba5887696e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8097be92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--bafb60f3--a5a9--526b--adce--8ea58a9a19cd-osd--block--bafb60f3--a5a9--526b--adce--8ea58a9a19cd']}})  2026-02-03 06:49:37.814921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:49:39.212461 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '26fa6d1d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part16', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part14', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part15', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part1', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-03 06:49:39.212570 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:49:39.212588 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:49:39.212601 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-nVMQam-LoCh-pHBS-YL0c-LNGg-ZFRZ-56lw3T', 'dm-uuid-CRYPT-LUKS2-027247ae00a3443e96338d8391a7da1a-nVMQam-LoCh-pHBS-YL0c-LNGg-ZFRZ-56lw3T'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-03 06:49:39.212615 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:49:39.212627 | orchestrator | 2026-02-03 06:49:39.212639 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-03 06:49:39.212674 | orchestrator | Tuesday 03 February 2026 06:49:38 +0000 (0:00:01.519) 0:54:52.156 ****** 2026-02-03 06:49:39.212706 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-03 06:49:39.212727 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--bafb60f3--a5a9--526b--adce--8ea58a9a19cd-osd--block--bafb60f3--a5a9--526b--adce--8ea58a9a19cd', 'dm-uuid-LVM-stKE3AAHbU7tUFxIQAJ72dtWy4EVot1jnVMQamLoChpHBSYL0cLNGgZFRZ56lw3T'], 'uuids': ['027247ae-00a3-443e-9633-8d8391a7da1a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '8097be92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['nVMQam-LoCh-pHBS-YL0c-LNGg-ZFRZ-56lw3T']}}, 'ansible_loop_var': 'item'})
2026-02-03 06:49:39.212741 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_30942d1f-f704-43cc-bdd1-e5a5821a35c3', 'scsi-SQEMU_QEMU_HARDDISK_30942d1f-f704-43cc-bdd1-e5a5821a35c3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '30942d1f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-03 06:49:39.212754 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Xh8ZTx-AObI-x7Qe-6Flc-GeSw-194p-Pfmv8i', 'scsi-0QEMU_QEMU_HARDDISK_b4cf4752-8315-482c-8a5b-0aee9859091f', 'scsi-SQEMU_QEMU_HARDDISK_b4cf4752-8315-482c-8a5b-0aee9859091f'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b4cf4752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--85b6ff9c--bd3f--596f--9d81--0006b9d69e29-osd--block--85b6ff9c--bd3f--596f--9d81--0006b9d69e29']}}, 'ansible_loop_var': 'item'})
2026-02-03 06:49:39.212767 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-03 06:49:39.212836 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-03 06:49:40.511359 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-03 06:49:40.511445 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-03 06:49:40.511455 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Vax7AP-xd4A-5hvn-IJK2-L8WY-uJjg-ErTdLp', 'dm-uuid-CRYPT-LUKS2-51cdba44ba2f44e4a9ba680ba42622f2-Vax7AP-xd4A-5hvn-IJK2-L8WY-uJjg-ErTdLp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-03 06:49:40.511462 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-03 06:49:40.511470 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--85b6ff9c--bd3f--596f--9d81--0006b9d69e29-osd--block--85b6ff9c--bd3f--596f--9d81--0006b9d69e29', 'dm-uuid-LVM-eCnBPCzOsBAMg7ZG1zzxsebDLR9lBnAnVax7APxd4A5hvnIJK2L8WYuJjgErTdLp'], 'uuids': ['51cdba44-ba2f-44e4-a9ba-680ba42622f2'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b4cf4752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Vax7AP-xd4A-5hvn-IJK2-L8WY-uJjg-ErTdLp']}}, 'ansible_loop_var': 'item'})
2026-02-03 06:49:40.511515 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-MNylkH-UFIw-FcM9-RNy8-22Oh-QCDT-pfyDSJ', 'scsi-0QEMU_QEMU_HARDDISK_8097be92-44ca-4be8-a1da-39ba5887696e', 'scsi-SQEMU_QEMU_HARDDISK_8097be92-44ca-4be8-a1da-39ba5887696e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8097be92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--bafb60f3--a5a9--526b--adce--8ea58a9a19cd-osd--block--bafb60f3--a5a9--526b--adce--8ea58a9a19cd']}}, 'ansible_loop_var': 'item'})
2026-02-03 06:49:40.511525 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-03 06:49:40.511533 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '26fa6d1d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part16', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part14', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part15', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part1', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-03 06:49:40.511546 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-03 06:49:40.511561 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-03 06:50:18.038293 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-nVMQam-LoCh-pHBS-YL0c-LNGg-ZFRZ-56lw3T', 'dm-uuid-CRYPT-LUKS2-027247ae00a3443e96338d8391a7da1a-nVMQam-LoCh-pHBS-YL0c-LNGg-ZFRZ-56lw3T'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1',
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-03 06:50:18.038420 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:50:18.038440 | orchestrator |
2026-02-03 06:50:18.038453 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-03 06:50:18.038466 | orchestrator | Tuesday 03 February 2026 06:49:40 +0000 (0:00:01.529) 0:54:53.686 ******
2026-02-03 06:50:18.038477 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:50:18.038489 | orchestrator |
2026-02-03 06:50:18.038500 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-03 06:50:18.038511 | orchestrator | Tuesday 03 February 2026 06:49:42 +0000 (0:00:01.560) 0:54:55.246 ******
2026-02-03 06:50:18.038522 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:50:18.038532 | orchestrator |
2026-02-03 06:50:18.038543 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-03 06:50:18.038554 | orchestrator | Tuesday 03 February 2026 06:49:43 +0000 (0:00:01.208) 0:54:56.455 ******
2026-02-03 06:50:18.038565 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:50:18.038576 | orchestrator |
2026-02-03 06:50:18.038587 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-03 06:50:18.038598 | orchestrator | Tuesday 03 February 2026 06:49:44 +0000 (0:00:01.543) 0:54:57.999 ******
2026-02-03 06:50:18.038609 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:50:18.038620 | orchestrator |
2026-02-03 06:50:18.038630 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-03 06:50:18.038641 | orchestrator | Tuesday 03 February 2026 06:49:46 +0000 (0:00:01.287) 0:54:59.287 ******
2026-02-03 06:50:18.038652 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:50:18.038663 | orchestrator |
2026-02-03 06:50:18.038674 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-03 06:50:18.038716 | orchestrator | Tuesday 03 February 2026 06:49:47 +0000 (0:00:01.319) 0:55:00.606 ******
2026-02-03 06:50:18.038728 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:50:18.038739 | orchestrator |
2026-02-03 06:50:18.038750 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-03 06:50:18.038761 | orchestrator | Tuesday 03 February 2026 06:49:48 +0000 (0:00:01.291) 0:55:01.898 ******
2026-02-03 06:50:18.038772 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-03 06:50:18.038783 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-03 06:50:18.038826 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-03 06:50:18.038839 | orchestrator |
2026-02-03 06:50:18.038850 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-03 06:50:18.038861 | orchestrator | Tuesday 03 February 2026 06:49:51 +0000 (0:00:02.468) 0:55:04.366 ******
2026-02-03 06:50:18.038872 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-03 06:50:18.038883 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-03 06:50:18.038894 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-03 06:50:18.038905 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:50:18.038916 | orchestrator |
2026-02-03 06:50:18.038926 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-03 06:50:18.038937 | orchestrator | Tuesday 03 February 2026 06:49:52 +0000 (0:00:01.224) 0:55:05.590 ******
2026-02-03 06:50:18.038961 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3
2026-02-03 06:50:18.038974 | orchestrator |
2026-02-03 06:50:18.038985 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-03 06:50:18.038997 | orchestrator | Tuesday 03 February 2026 06:49:53 +0000 (0:00:01.161) 0:55:06.752 ******
2026-02-03 06:50:18.039008 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:50:18.039018 | orchestrator |
2026-02-03 06:50:18.039029 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-03 06:50:18.039040 | orchestrator | Tuesday 03 February 2026 06:49:54 +0000 (0:00:01.321) 0:55:08.074 ******
2026-02-03 06:50:18.039051 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:50:18.039061 | orchestrator |
2026-02-03 06:50:18.039072 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-03 06:50:18.039083 | orchestrator | Tuesday 03 February 2026 06:49:56 +0000 (0:00:01.277) 0:55:09.351 ******
2026-02-03 06:50:18.039094 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:50:18.039105 | orchestrator |
2026-02-03 06:50:18.039115 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-03 06:50:18.039126 | orchestrator | Tuesday 03 February 2026 06:49:57 +0000 (0:00:01.207) 0:55:10.559 ******
2026-02-03 06:50:18.039137 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:50:18.039147 | orchestrator |
2026-02-03 06:50:18.039158 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-03 06:50:18.039169 | orchestrator | Tuesday 03 February 2026 06:49:58 +0000 (0:00:01.270) 0:55:11.830 ******
2026-02-03 06:50:18.039197 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-03 06:50:18.039228 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-03 06:50:18.039240 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-03 06:50:18.039251 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:50:18.039262 | orchestrator |
2026-02-03 06:50:18.039273 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-03 06:50:18.039284 | orchestrator | Tuesday 03 February 2026 06:50:00 +0000 (0:00:01.494) 0:55:13.325 ******
2026-02-03 06:50:18.039294 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-03 06:50:18.039305 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-03 06:50:18.039316 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-03 06:50:18.039335 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:50:18.039346 | orchestrator |
2026-02-03 06:50:18.039357 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-03 06:50:18.039368 | orchestrator | Tuesday 03 February 2026 06:50:01 +0000 (0:00:01.570) 0:55:14.895 ******
2026-02-03 06:50:18.039379 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-03 06:50:18.039390 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-03 06:50:18.039400 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-03 06:50:18.039411 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:50:18.039422 | orchestrator |
2026-02-03 06:50:18.039433 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-03 06:50:18.039444 | orchestrator | Tuesday 03 February 2026 06:50:03 +0000 (0:00:01.506) 0:55:16.402 ******
2026-02-03 06:50:18.039454 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:50:18.039466 | orchestrator |
2026-02-03 06:50:18.039476 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-03 06:50:18.039487 | orchestrator | Tuesday 03 February 2026 06:50:04 +0000 (0:00:01.178) 0:55:17.581 ******
2026-02-03 06:50:18.039498 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-03 06:50:18.039509 | orchestrator |
2026-02-03 06:50:18.039520 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-03 06:50:18.039531 | orchestrator | Tuesday 03 February 2026 06:50:06 +0000 (0:00:01.879) 0:55:19.460 ******
2026-02-03 06:50:18.039542 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-03 06:50:18.039552 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-03 06:50:18.039563 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-03 06:50:18.039574 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-03 06:50:18.039584 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-03 06:50:18.039595 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-03 06:50:18.039606 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-03 06:50:18.039617 | orchestrator |
2026-02-03 06:50:18.039628 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-03 06:50:18.039639 | orchestrator | Tuesday 03 February 2026 06:50:08 +0000 (0:00:02.526) 0:55:21.987 ******
2026-02-03 06:50:18.039649 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-03 06:50:18.039660 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-03 06:50:18.039671 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-03 06:50:18.039682 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-03 06:50:18.039693 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-03 06:50:18.039703 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-03 06:50:18.039714 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-03 06:50:18.039724 | orchestrator |
2026-02-03 06:50:18.039735 | orchestrator | TASK [Prevent restart from the packaging] **************************************
2026-02-03 06:50:18.039746 | orchestrator | Tuesday 03 February 2026 06:50:11 +0000 (0:00:02.848) 0:55:24.835 ******
2026-02-03 06:50:18.039757 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:50:18.039768 | orchestrator |
2026-02-03 06:50:18.039778 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-03 06:50:18.039789 | orchestrator | Tuesday 03 February 2026 06:50:12 +0000 (0:00:01.173) 0:55:26.008 ******
2026-02-03 06:50:18.039852 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3
2026-02-03 06:50:18.039880 | orchestrator |
2026-02-03 06:50:18.039893 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-03 06:50:18.039904 | orchestrator | Tuesday 03 February 2026 06:50:14 +0000 (0:00:01.206) 0:55:27.215 ******
2026-02-03 06:50:18.039915 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3
2026-02-03 06:50:18.039926 | orchestrator |
2026-02-03 06:50:18.039937 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-03 06:50:18.039947 | orchestrator | Tuesday 03 February 2026 06:50:15 +0000 (0:00:01.178) 0:55:28.393 ******
2026-02-03 06:50:18.039958 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:50:18.039969 | orchestrator |
2026-02-03 06:50:18.039979 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-03 06:50:18.039990 | orchestrator | Tuesday 03 February 2026 06:50:16 +0000 (0:00:01.230) 0:55:29.624 ******
2026-02-03 06:50:18.040001 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:50:18.040012 | orchestrator |
2026-02-03 06:50:18.040029 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-03 06:50:18.040048 | orchestrator | Tuesday 03 February 2026 06:50:18 +0000 (0:00:01.585) 0:55:31.209 ******
2026-02-03 06:51:12.439582 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:51:12.439738 | orchestrator |
2026-02-03 06:51:12.439756 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-03 06:51:12.439770 | orchestrator | Tuesday 03 February 2026 06:50:19 +0000 (0:00:01.659) 0:55:32.869 ******
2026-02-03 06:51:12.439782 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:51:12.439860 | orchestrator |
2026-02-03 06:51:12.439874 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-03 06:51:12.439885 | orchestrator | Tuesday 03 February 2026 06:50:21 +0000 (0:00:01.645) 0:55:34.514 ******
2026-02-03 06:51:12.439897 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:51:12.439909 | orchestrator |
2026-02-03 06:51:12.439921 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-03 06:51:12.439932 | orchestrator | Tuesday 03 February 2026 06:50:22 +0000 (0:00:01.206) 0:55:35.721 ******
2026-02-03 06:51:12.439943 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:51:12.439954 | orchestrator |
2026-02-03 06:51:12.439966 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-03 06:51:12.439986 | orchestrator | Tuesday 03 February 2026 06:50:23 +0000 (0:00:01.326) 0:55:37.048 ******
2026-02-03 06:51:12.440006 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:51:12.440024 | orchestrator |
2026-02-03 06:51:12.440046 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-03 06:51:12.440066 | orchestrator | Tuesday 03 February 2026 06:50:25 +0000 (0:00:01.376) 0:55:38.424 ******
2026-02-03 06:51:12.440085 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:51:12.440099 | orchestrator |
2026-02-03 06:51:12.440112 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-03 06:51:12.440125 | orchestrator | Tuesday 03 February 2026 06:50:26 +0000 (0:00:01.643) 0:55:40.068 ******
2026-02-03 06:51:12.440137 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:51:12.440150 | orchestrator |
2026-02-03 06:51:12.440163 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-03 06:51:12.440175 | orchestrator | Tuesday 03 February 2026 06:50:28 +0000 (0:00:01.763) 0:55:41.832 ******
2026-02-03 06:51:12.440188 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:51:12.440200 | orchestrator |
2026-02-03 06:51:12.440213 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-03 06:51:12.440226 | orchestrator | Tuesday 03 February 2026 06:50:29 +0000 (0:00:01.232) 0:55:43.064 ******
2026-02-03 06:51:12.440238 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:51:12.440251 | orchestrator |
2026-02-03 06:51:12.440263 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-03 06:51:12.440275 | orchestrator | Tuesday 03 February 2026 06:50:31 +0000 (0:00:01.186) 0:55:44.250 ******
2026-02-03 06:51:12.440321 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:51:12.440334 | orchestrator |
2026-02-03 06:51:12.440347 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-03 06:51:12.440360 | orchestrator | Tuesday 03 February 2026 06:50:32 +0000 (0:00:01.213) 0:55:45.464 ******
2026-02-03 06:51:12.440372 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:51:12.440384 | orchestrator |
2026-02-03 06:51:12.440398 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-03 06:51:12.440411 | orchestrator | Tuesday 03 February 2026 06:50:33 +0000 (0:00:01.234) 0:55:46.698 ******
2026-02-03 06:51:12.440422 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:51:12.440433 | orchestrator |
2026-02-03 06:51:12.440444 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-03 06:51:12.440455 | orchestrator | Tuesday 03 February 2026 06:50:34 +0000 (0:00:01.173) 0:55:47.872 ******
2026-02-03 06:51:12.440466 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:51:12.440477 | orchestrator |
2026-02-03 06:51:12.440487 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-03 06:51:12.440498 | orchestrator | Tuesday 03 February 2026 06:50:35 +0000 (0:00:01.225) 0:55:49.098 ******
2026-02-03 06:51:12.440509 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:51:12.440520 | orchestrator |
2026-02-03 06:51:12.440531 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-03 06:51:12.440542 | orchestrator | Tuesday 03 February 2026 06:50:37 +0000 (0:00:01.198) 0:55:50.297 ******
2026-02-03 06:51:12.440553 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:51:12.440563 | orchestrator |
2026-02-03 06:51:12.440574 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-03 06:51:12.440585 | orchestrator | Tuesday 03 February 2026 06:50:38 +0000 (0:00:01.169) 0:55:51.466 ******
2026-02-03 06:51:12.440596 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:51:12.440607 | orchestrator |
2026-02-03 06:51:12.440617 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-03 06:51:12.440628 | orchestrator | Tuesday 03 February 2026 06:50:39 +0000 (0:00:01.201) 0:55:52.667 ******
2026-02-03 06:51:12.440639 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:51:12.440650 | orchestrator |
2026-02-03 06:51:12.440661 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-03 06:51:12.440672 | orchestrator | Tuesday 03 February 2026 06:50:40 +0000 (0:00:01.395) 0:55:54.063 ******
2026-02-03 06:51:12.440683 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:51:12.440695 | orchestrator |
2026-02-03 06:51:12.440706 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-03 06:51:12.440717 | orchestrator | Tuesday 03 February 2026 06:50:42 +0000 (0:00:01.217) 0:55:55.280 ******
2026-02-03 06:51:12.440727 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:51:12.440739 | orchestrator |
2026-02-03 06:51:12.440749 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-03 06:51:12.440760 | orchestrator | Tuesday 03 February 2026 06:50:43 +0000 (0:00:01.184) 0:55:56.464 ******
2026-02-03 06:51:12.440771 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:51:12.440782 | orchestrator |
2026-02-03 06:51:12.440811 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-03 06:51:12.440823 | orchestrator | Tuesday 03 February 2026 06:50:44 +0000 (0:00:01.131) 0:55:57.596 ******
2026-02-03 06:51:12.440852 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:51:12.440864 | orchestrator |
2026-02-03 06:51:12.440875 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-03 06:51:12.440907 | orchestrator | Tuesday 03 February 2026 06:50:45 +0000 (0:00:01.320) 0:55:58.917 ******
2026-02-03 06:51:12.440918 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:51:12.440929 | orchestrator |
2026-02-03 06:51:12.440940 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-03 06:51:12.440951 | orchestrator | Tuesday 03 February 2026 06:50:46 +0000 (0:00:01.243) 0:56:00.160 ******
2026-02-03 06:51:12.440976 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:51:12.440987 | orchestrator |
2026-02-03 06:51:12.440999 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-03 06:51:12.441010 | orchestrator | Tuesday 03 February 2026 06:50:48 +0000 (0:00:01.221) 0:56:01.382 ******
2026-02-03 06:51:12.441020 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:51:12.441031 | orchestrator |
2026-02-03 06:51:12.441042 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-03 06:51:12.441054 | orchestrator | Tuesday 03 February 2026 06:50:49 +0000 (0:00:01.161) 0:56:02.544 ******
2026-02-03 06:51:12.441065 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:51:12.441076 | orchestrator |
2026-02-03 06:51:12.441087 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-03 06:51:12.441098 | orchestrator | Tuesday 03 February 2026 06:50:50 +0000 (0:00:01.262) 0:56:03.807 ******
2026-02-03 06:51:12.441108 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:51:12.441119 | orchestrator |
2026-02-03 06:51:12.441130 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-03 06:51:12.441141 | orchestrator | Tuesday 03 February 2026 06:50:51 +0000 (0:00:01.184) 0:56:04.991 ******
2026-02-03 06:51:12.441152 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:51:12.441163 | orchestrator |
2026-02-03 06:51:12.441173 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-03 06:51:12.441184 | orchestrator | Tuesday 03 February 2026 06:50:53 +0000 (0:00:01.235) 0:56:06.226 ******
2026-02-03 06:51:12.441195 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:51:12.441206 | orchestrator |
2026-02-03 06:51:12.441217 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-03 06:51:12.441228 | orchestrator | Tuesday 03 February 2026 06:50:54 +0000 (0:00:01.251) 0:56:07.478 ******
2026-02-03 06:51:12.441239 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:51:12.441249 | orchestrator |
2026-02-03 06:51:12.441260 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-03 06:51:12.441271 | orchestrator | Tuesday 03 February 2026 06:50:55 +0000 (0:00:01.366) 0:56:08.844 ******
2026-02-03 06:51:12.441282 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:51:12.441293 | orchestrator |
2026-02-03 06:51:12.441304 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-03 06:51:12.441315 | orchestrator | Tuesday 03 February 2026 06:50:57 +0000 (0:00:01.992) 0:56:10.837 ******
2026-02-03 06:51:12.441325 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:51:12.441336 | orchestrator |
2026-02-03 06:51:12.441347 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-03 06:51:12.441358 | orchestrator | Tuesday 03 February 2026 06:51:00 +0000 (0:00:02.436) 0:56:13.274 ******
2026-02-03 06:51:12.441369 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3
2026-02-03 06:51:12.441381 | orchestrator |
2026-02-03 06:51:12.441392 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-03 06:51:12.441403 | orchestrator | Tuesday 03 February 2026 06:51:01 +0000 (0:00:01.249) 0:56:14.523 ******
2026-02-03 06:51:12.441414 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:51:12.441425 | orchestrator |
2026-02-03 06:51:12.441436 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-03 06:51:12.441447 | orchestrator | Tuesday 03 February 2026 06:51:02 +0000 (0:00:01.220) 0:56:15.743 ******
2026-02-03 06:51:12.441458 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:51:12.441469 | orchestrator |
2026-02-03 06:51:12.441479 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-03 06:51:12.441490 | orchestrator | Tuesday 03 February 2026 06:51:03 +0000 (0:00:01.157) 0:56:16.901 ******
2026-02-03 06:51:12.441501 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-03 06:51:12.441512 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-03 06:51:12.441530 | orchestrator |
2026-02-03 06:51:12.441541 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-03 06:51:12.441552 | orchestrator | Tuesday 03 February 2026 06:51:05 +0000 (0:00:02.013) 0:56:18.914 ******
2026-02-03 06:51:12.441563 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:51:12.441574 | orchestrator |
2026-02-03 06:51:12.441584 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-03 06:51:12.441595 | orchestrator | Tuesday 03 February 2026 06:51:07 +0000 (0:00:01.595) 0:56:20.510 ******
2026-02-03 06:51:12.441606 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:51:12.441617 | orchestrator |
2026-02-03 06:51:12.441628 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-03 06:51:12.441639 | orchestrator | Tuesday 03 February 2026 06:51:08 +0000 (0:00:01.248) 0:56:21.758 ******
2026-02-03 06:51:12.441650 |
orchestrator | skipping: [testbed-node-3] 2026-02-03 06:51:12.441660 | orchestrator | 2026-02-03 06:51:12.441671 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-03 06:51:12.441682 | orchestrator | Tuesday 03 February 2026 06:51:09 +0000 (0:00:01.281) 0:56:23.040 ****** 2026-02-03 06:51:12.441693 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:51:12.441704 | orchestrator | 2026-02-03 06:51:12.441715 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-03 06:51:12.441726 | orchestrator | Tuesday 03 February 2026 06:51:10 +0000 (0:00:01.141) 0:56:24.182 ****** 2026-02-03 06:51:12.441737 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3 2026-02-03 06:51:12.441748 | orchestrator | 2026-02-03 06:51:12.441764 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-03 06:51:12.441783 | orchestrator | Tuesday 03 February 2026 06:51:12 +0000 (0:00:01.428) 0:56:25.610 ****** 2026-02-03 06:52:01.674599 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:52:01.674859 | orchestrator | 2026-02-03 06:52:01.674880 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-03 06:52:01.674891 | orchestrator | Tuesday 03 February 2026 06:51:14 +0000 (0:00:01.795) 0:56:27.406 ****** 2026-02-03 06:52:01.674902 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-03 06:52:01.674911 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-03 06:52:01.674920 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-03 06:52:01.674929 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:52:01.674939 | orchestrator | 2026-02-03 06:52:01.674948 | orchestrator | TASK [ceph-container-common 
: Pulling node-exporter container image] *********** 2026-02-03 06:52:01.674957 | orchestrator | Tuesday 03 February 2026 06:51:15 +0000 (0:00:01.288) 0:56:28.695 ****** 2026-02-03 06:52:01.674966 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:52:01.674975 | orchestrator | 2026-02-03 06:52:01.674983 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-03 06:52:01.674995 | orchestrator | Tuesday 03 February 2026 06:51:16 +0000 (0:00:01.177) 0:56:29.873 ****** 2026-02-03 06:52:01.675010 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:52:01.675020 | orchestrator | 2026-02-03 06:52:01.675029 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-03 06:52:01.675038 | orchestrator | Tuesday 03 February 2026 06:51:18 +0000 (0:00:01.346) 0:56:31.220 ****** 2026-02-03 06:52:01.675047 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:52:01.675055 | orchestrator | 2026-02-03 06:52:01.675068 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-03 06:52:01.675082 | orchestrator | Tuesday 03 February 2026 06:51:19 +0000 (0:00:01.193) 0:56:32.413 ****** 2026-02-03 06:52:01.675096 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:52:01.675110 | orchestrator | 2026-02-03 06:52:01.675125 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-03 06:52:01.675168 | orchestrator | Tuesday 03 February 2026 06:51:20 +0000 (0:00:01.245) 0:56:33.658 ****** 2026-02-03 06:52:01.675179 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:52:01.675190 | orchestrator | 2026-02-03 06:52:01.675200 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-03 06:52:01.675210 | orchestrator | Tuesday 03 February 2026 06:51:21 +0000 (0:00:01.229) 0:56:34.888 ****** 2026-02-03 06:52:01.675221 | orchestrator | 
ok: [testbed-node-3] 2026-02-03 06:52:01.675231 | orchestrator | 2026-02-03 06:52:01.675242 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-03 06:52:01.675253 | orchestrator | Tuesday 03 February 2026 06:51:24 +0000 (0:00:02.607) 0:56:37.496 ****** 2026-02-03 06:52:01.675263 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:52:01.675273 | orchestrator | 2026-02-03 06:52:01.675284 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-03 06:52:01.675295 | orchestrator | Tuesday 03 February 2026 06:51:25 +0000 (0:00:01.197) 0:56:38.693 ****** 2026-02-03 06:52:01.675304 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3 2026-02-03 06:52:01.675313 | orchestrator | 2026-02-03 06:52:01.675322 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-03 06:52:01.675330 | orchestrator | Tuesday 03 February 2026 06:51:26 +0000 (0:00:01.161) 0:56:39.854 ****** 2026-02-03 06:52:01.675339 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:52:01.675348 | orchestrator | 2026-02-03 06:52:01.675357 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-03 06:52:01.675365 | orchestrator | Tuesday 03 February 2026 06:51:27 +0000 (0:00:01.189) 0:56:41.043 ****** 2026-02-03 06:52:01.675374 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:52:01.675382 | orchestrator | 2026-02-03 06:52:01.675391 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-03 06:52:01.675400 | orchestrator | Tuesday 03 February 2026 06:51:29 +0000 (0:00:01.347) 0:56:42.391 ****** 2026-02-03 06:52:01.675408 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:52:01.675417 | orchestrator | 2026-02-03 06:52:01.675426 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
mimic] ********************* 2026-02-03 06:52:01.675434 | orchestrator | Tuesday 03 February 2026 06:51:30 +0000 (0:00:01.253) 0:56:43.644 ****** 2026-02-03 06:52:01.675443 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:52:01.675451 | orchestrator | 2026-02-03 06:52:01.675460 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-03 06:52:01.675469 | orchestrator | Tuesday 03 February 2026 06:51:31 +0000 (0:00:01.199) 0:56:44.844 ****** 2026-02-03 06:52:01.675478 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:52:01.675486 | orchestrator | 2026-02-03 06:52:01.675495 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-03 06:52:01.675503 | orchestrator | Tuesday 03 February 2026 06:51:32 +0000 (0:00:01.241) 0:56:46.086 ****** 2026-02-03 06:52:01.675512 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:52:01.675520 | orchestrator | 2026-02-03 06:52:01.675529 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-03 06:52:01.675538 | orchestrator | Tuesday 03 February 2026 06:51:34 +0000 (0:00:01.249) 0:56:47.336 ****** 2026-02-03 06:52:01.675546 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:52:01.675555 | orchestrator | 2026-02-03 06:52:01.675564 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-03 06:52:01.675572 | orchestrator | Tuesday 03 February 2026 06:51:35 +0000 (0:00:01.202) 0:56:48.538 ****** 2026-02-03 06:52:01.675581 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:52:01.675589 | orchestrator | 2026-02-03 06:52:01.675598 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-03 06:52:01.675607 | orchestrator | Tuesday 03 February 2026 06:51:36 +0000 (0:00:01.206) 0:56:49.745 ****** 2026-02-03 06:52:01.675629 | orchestrator | ok: [testbed-node-3] 
2026-02-03 06:52:01.675639 | orchestrator | 2026-02-03 06:52:01.675647 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-03 06:52:01.675681 | orchestrator | Tuesday 03 February 2026 06:51:37 +0000 (0:00:01.246) 0:56:50.991 ****** 2026-02-03 06:52:01.675691 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3 2026-02-03 06:52:01.675701 | orchestrator | 2026-02-03 06:52:01.675710 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-03 06:52:01.675718 | orchestrator | Tuesday 03 February 2026 06:51:38 +0000 (0:00:01.142) 0:56:52.134 ****** 2026-02-03 06:52:01.675727 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-02-03 06:52:01.675736 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-02-03 06:52:01.675745 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-03 06:52:01.675754 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-03 06:52:01.675762 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-03 06:52:01.675771 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-03 06:52:01.675780 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-03 06:52:01.675817 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-03 06:52:01.675827 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-03 06:52:01.675836 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-03 06:52:01.675845 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-03 06:52:01.675854 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-03 06:52:01.675862 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-03 06:52:01.675871 | 
orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-03 06:52:01.675879 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-02-03 06:52:01.675888 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-02-03 06:52:01.675897 | orchestrator | 2026-02-03 06:52:01.675905 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-03 06:52:01.675914 | orchestrator | Tuesday 03 February 2026 06:51:46 +0000 (0:00:07.106) 0:56:59.240 ****** 2026-02-03 06:52:01.675922 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3 2026-02-03 06:52:01.675931 | orchestrator | 2026-02-03 06:52:01.675940 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-03 06:52:01.675948 | orchestrator | Tuesday 03 February 2026 06:51:47 +0000 (0:00:01.153) 0:57:00.394 ****** 2026-02-03 06:52:01.675957 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-03 06:52:01.675967 | orchestrator | 2026-02-03 06:52:01.675976 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-03 06:52:01.675984 | orchestrator | Tuesday 03 February 2026 06:51:48 +0000 (0:00:01.633) 0:57:02.028 ****** 2026-02-03 06:52:01.675993 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-03 06:52:01.676002 | orchestrator | 2026-02-03 06:52:01.676010 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-03 06:52:01.676019 | orchestrator | Tuesday 03 February 2026 06:51:50 +0000 (0:00:02.038) 0:57:04.066 ****** 2026-02-03 06:52:01.676028 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:52:01.676036 | orchestrator | 
2026-02-03 06:52:01.676045 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-03 06:52:01.676054 | orchestrator | Tuesday 03 February 2026 06:51:52 +0000 (0:00:01.181) 0:57:05.248 ****** 2026-02-03 06:52:01.676062 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:52:01.676071 | orchestrator | 2026-02-03 06:52:01.676079 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-03 06:52:01.676088 | orchestrator | Tuesday 03 February 2026 06:51:53 +0000 (0:00:01.201) 0:57:06.450 ****** 2026-02-03 06:52:01.676102 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:52:01.676111 | orchestrator | 2026-02-03 06:52:01.676120 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-03 06:52:01.676128 | orchestrator | Tuesday 03 February 2026 06:51:54 +0000 (0:00:01.195) 0:57:07.645 ****** 2026-02-03 06:52:01.676137 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:52:01.676145 | orchestrator | 2026-02-03 06:52:01.676154 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-03 06:52:01.676163 | orchestrator | Tuesday 03 February 2026 06:51:55 +0000 (0:00:01.184) 0:57:08.830 ****** 2026-02-03 06:52:01.676171 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:52:01.676180 | orchestrator | 2026-02-03 06:52:01.676189 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-03 06:52:01.676197 | orchestrator | Tuesday 03 February 2026 06:51:56 +0000 (0:00:01.214) 0:57:10.044 ****** 2026-02-03 06:52:01.676206 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:52:01.676215 | orchestrator | 2026-02-03 06:52:01.676223 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-03 06:52:01.676232 | 
orchestrator | Tuesday 03 February 2026 06:51:58 +0000 (0:00:01.238) 0:57:11.283 ****** 2026-02-03 06:52:01.676241 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:52:01.676249 | orchestrator | 2026-02-03 06:52:01.676258 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-03 06:52:01.676267 | orchestrator | Tuesday 03 February 2026 06:51:59 +0000 (0:00:01.195) 0:57:12.479 ****** 2026-02-03 06:52:01.676275 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:52:01.676284 | orchestrator | 2026-02-03 06:52:01.676297 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-03 06:52:01.676306 | orchestrator | Tuesday 03 February 2026 06:52:00 +0000 (0:00:01.191) 0:57:13.671 ****** 2026-02-03 06:52:01.676315 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:52:01.676324 | orchestrator | 2026-02-03 06:52:01.676338 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-03 06:53:01.549683 | orchestrator | Tuesday 03 February 2026 06:52:01 +0000 (0:00:01.175) 0:57:14.846 ****** 2026-02-03 06:53:01.549866 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:53:01.549885 | orchestrator | 2026-02-03 06:53:01.549897 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-03 06:53:01.549909 | orchestrator | Tuesday 03 February 2026 06:52:02 +0000 (0:00:01.138) 0:57:15.984 ****** 2026-02-03 06:53:01.549920 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:53:01.549931 | orchestrator | 2026-02-03 06:53:01.549942 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-03 06:53:01.549953 | orchestrator | Tuesday 03 February 2026 06:52:04 +0000 (0:00:01.249) 0:57:17.234 ****** 2026-02-03 06:53:01.549964 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] 2026-02-03 06:53:01.549975 | orchestrator | 2026-02-03 06:53:01.549987 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-03 06:53:01.549998 | orchestrator | Tuesday 03 February 2026 06:52:09 +0000 (0:00:05.124) 0:57:22.359 ****** 2026-02-03 06:53:01.550010 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-03 06:53:01.550088 | orchestrator | 2026-02-03 06:53:01.550100 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-03 06:53:01.550111 | orchestrator | Tuesday 03 February 2026 06:52:10 +0000 (0:00:01.197) 0:57:23.557 ****** 2026-02-03 06:53:01.550193 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-02-03 06:53:01.550238 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-02-03 06:53:01.550252 | orchestrator | 2026-02-03 06:53:01.550263 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-03 06:53:01.550274 | orchestrator | Tuesday 03 February 2026 06:52:15 +0000 (0:00:05.149) 0:57:28.706 ****** 2026-02-03 06:53:01.550285 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:53:01.550296 | orchestrator | 2026-02-03 06:53:01.550307 | orchestrator | TASK [ceph-config : Create ceph 
conf directory] ******************************** 2026-02-03 06:53:01.550317 | orchestrator | Tuesday 03 February 2026 06:52:16 +0000 (0:00:01.168) 0:57:29.874 ****** 2026-02-03 06:53:01.550328 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:53:01.550339 | orchestrator | 2026-02-03 06:53:01.550350 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-03 06:53:01.550361 | orchestrator | Tuesday 03 February 2026 06:52:17 +0000 (0:00:01.161) 0:57:31.036 ****** 2026-02-03 06:53:01.550372 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:53:01.550383 | orchestrator | 2026-02-03 06:53:01.550394 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-03 06:53:01.550405 | orchestrator | Tuesday 03 February 2026 06:52:19 +0000 (0:00:01.334) 0:57:32.370 ****** 2026-02-03 06:53:01.550415 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:53:01.550426 | orchestrator | 2026-02-03 06:53:01.550437 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-03 06:53:01.550448 | orchestrator | Tuesday 03 February 2026 06:52:20 +0000 (0:00:01.202) 0:57:33.573 ****** 2026-02-03 06:53:01.550459 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:53:01.550470 | orchestrator | 2026-02-03 06:53:01.550480 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-03 06:53:01.550491 | orchestrator | Tuesday 03 February 2026 06:52:21 +0000 (0:00:01.274) 0:57:34.848 ****** 2026-02-03 06:53:01.550502 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:53:01.550514 | orchestrator | 2026-02-03 06:53:01.550525 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-03 06:53:01.550535 | orchestrator | Tuesday 03 February 2026 06:52:22 +0000 (0:00:01.325) 0:57:36.174 
****** 2026-02-03 06:53:01.550546 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-03 06:53:01.550558 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-03 06:53:01.550569 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-03 06:53:01.550580 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:53:01.550590 | orchestrator | 2026-02-03 06:53:01.550601 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-03 06:53:01.550612 | orchestrator | Tuesday 03 February 2026 06:52:24 +0000 (0:00:01.510) 0:57:37.684 ****** 2026-02-03 06:53:01.550623 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-03 06:53:01.550634 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-03 06:53:01.550645 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-03 06:53:01.550656 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:53:01.550667 | orchestrator | 2026-02-03 06:53:01.550678 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-03 06:53:01.550702 | orchestrator | Tuesday 03 February 2026 06:52:26 +0000 (0:00:01.595) 0:57:39.280 ****** 2026-02-03 06:53:01.550714 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-03 06:53:01.550725 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-03 06:53:01.550736 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-03 06:53:01.550764 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:53:01.550808 | orchestrator | 2026-02-03 06:53:01.550820 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-03 06:53:01.550831 | orchestrator | Tuesday 03 February 2026 06:52:27 +0000 (0:00:01.848) 0:57:41.128 ****** 2026-02-03 06:53:01.550842 | orchestrator | ok: 
[testbed-node-3] 2026-02-03 06:53:01.550853 | orchestrator | 2026-02-03 06:53:01.550864 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-03 06:53:01.550875 | orchestrator | Tuesday 03 February 2026 06:52:29 +0000 (0:00:01.244) 0:57:42.373 ****** 2026-02-03 06:53:01.550886 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-03 06:53:01.550897 | orchestrator | 2026-02-03 06:53:01.550907 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-03 06:53:01.550918 | orchestrator | Tuesday 03 February 2026 06:52:31 +0000 (0:00:02.083) 0:57:44.457 ****** 2026-02-03 06:53:01.550929 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:53:01.550940 | orchestrator | 2026-02-03 06:53:01.550951 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-02-03 06:53:01.550962 | orchestrator | Tuesday 03 February 2026 06:52:33 +0000 (0:00:01.835) 0:57:46.293 ****** 2026-02-03 06:53:01.550973 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:53:01.550984 | orchestrator | 2026-02-03 06:53:01.550995 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-02-03 06:53:01.551006 | orchestrator | Tuesday 03 February 2026 06:52:34 +0000 (0:00:01.269) 0:57:47.562 ****** 2026-02-03 06:53:01.551017 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3 2026-02-03 06:53:01.551028 | orchestrator | 2026-02-03 06:53:01.551039 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-02-03 06:53:01.551050 | orchestrator | Tuesday 03 February 2026 06:52:35 +0000 (0:00:01.561) 0:57:49.124 ****** 2026-02-03 06:53:01.551061 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-03 06:53:01.551072 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 
2026-02-03 06:53:01.551083 | orchestrator | 2026-02-03 06:53:01.551093 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-02-03 06:53:01.551104 | orchestrator | Tuesday 03 February 2026 06:52:37 +0000 (0:00:01.935) 0:57:51.059 ****** 2026-02-03 06:53:01.551115 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 06:53:01.551126 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-03 06:53:01.551138 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-03 06:53:01.551149 | orchestrator | 2026-02-03 06:53:01.551160 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-02-03 06:53:01.551171 | orchestrator | Tuesday 03 February 2026 06:52:41 +0000 (0:00:03.294) 0:57:54.353 ****** 2026-02-03 06:53:01.551182 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-02-03 06:53:01.551193 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-03 06:53:01.551204 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:53:01.551215 | orchestrator | 2026-02-03 06:53:01.551226 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-03 06:53:01.551237 | orchestrator | Tuesday 03 February 2026 06:52:43 +0000 (0:00:02.089) 0:57:56.443 ****** 2026-02-03 06:53:01.551248 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:53:01.551259 | orchestrator | 2026-02-03 06:53:01.551270 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-02-03 06:53:01.551281 | orchestrator | Tuesday 03 February 2026 06:52:44 +0000 (0:00:01.542) 0:57:57.986 ****** 2026-02-03 06:53:01.551292 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:53:01.551303 | orchestrator | 2026-02-03 06:53:01.551313 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-03 
06:53:01.551324 | orchestrator | Tuesday 03 February 2026 06:52:45 +0000 (0:00:01.200) 0:57:59.186 ****** 2026-02-03 06:53:01.551335 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3 2026-02-03 06:53:01.551354 | orchestrator | 2026-02-03 06:53:01.551365 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-03 06:53:01.551376 | orchestrator | Tuesday 03 February 2026 06:52:48 +0000 (0:00:02.035) 0:58:01.221 ****** 2026-02-03 06:53:01.551387 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3 2026-02-03 06:53:01.551398 | orchestrator | 2026-02-03 06:53:01.551409 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-03 06:53:01.551420 | orchestrator | Tuesday 03 February 2026 06:52:49 +0000 (0:00:01.507) 0:58:02.728 ****** 2026-02-03 06:53:01.551430 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:53:01.551441 | orchestrator | 2026-02-03 06:53:01.551452 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-03 06:53:01.551463 | orchestrator | Tuesday 03 February 2026 06:52:51 +0000 (0:00:02.212) 0:58:04.941 ****** 2026-02-03 06:53:01.551474 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:53:01.551485 | orchestrator | 2026-02-03 06:53:01.551496 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-03 06:53:01.551507 | orchestrator | Tuesday 03 February 2026 06:52:53 +0000 (0:00:02.059) 0:58:07.000 ****** 2026-02-03 06:53:01.551518 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:53:01.551528 | orchestrator | 2026-02-03 06:53:01.551539 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-03 06:53:01.551550 | orchestrator | Tuesday 03 February 2026 06:52:56 +0000 (0:00:02.418) 0:58:09.418 ****** 2026-02-03 06:53:01.551561 | 
orchestrator | ok: [testbed-node-3] 2026-02-03 06:53:01.551572 | orchestrator | 2026-02-03 06:53:01.551583 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-03 06:53:01.551618 | orchestrator | Tuesday 03 February 2026 06:52:58 +0000 (0:00:02.412) 0:58:11.831 ****** 2026-02-03 06:53:01.551630 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:53:01.551641 | orchestrator | 2026-02-03 06:53:01.551652 | orchestrator | TASK [Restart ceph mds] ******************************************************** 2026-02-03 06:53:01.551663 | orchestrator | Tuesday 03 February 2026 06:53:00 +0000 (0:00:01.719) 0:58:13.550 ****** 2026-02-03 06:53:01.551682 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:53:37.506893 | orchestrator | 2026-02-03 06:53:37.507028 | orchestrator | TASK [Restart active mds] ****************************************************** 2026-02-03 06:53:37.507047 | orchestrator | Tuesday 03 February 2026 06:53:01 +0000 (0:00:01.176) 0:58:14.726 ****** 2026-02-03 06:53:37.507060 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:53:37.507117 | orchestrator | 2026-02-03 06:53:37.507131 | orchestrator | PLAY [Upgrade standbys ceph mdss cluster] ************************************** 2026-02-03 06:53:37.507143 | orchestrator | 2026-02-03 06:53:37.507155 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-03 06:53:37.507166 | orchestrator | Tuesday 03 February 2026 06:53:10 +0000 (0:00:09.074) 0:58:23.801 ****** 2026-02-03 06:53:37.507178 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4, testbed-node-5 2026-02-03 06:53:37.507190 | orchestrator | 2026-02-03 06:53:37.507200 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-03 06:53:37.507212 | orchestrator | Tuesday 03 February 2026 06:53:12 +0000 (0:00:01.749) 0:58:25.550 ****** 2026-02-03 06:53:37.507222 | 
orchestrator | ok: [testbed-node-4]
2026-02-03 06:53:37.507234 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:53:37.507244 | orchestrator |
2026-02-03 06:53:37.507256 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-03 06:53:37.507266 | orchestrator | Tuesday 03 February 2026 06:53:14 +0000 (0:00:01.733) 0:58:27.284 ******
2026-02-03 06:53:37.507277 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:53:37.507288 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:53:37.507299 | orchestrator |
2026-02-03 06:53:37.507311 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-03 06:53:37.507324 | orchestrator | Tuesday 03 February 2026 06:53:15 +0000 (0:00:01.379) 0:58:28.663 ******
2026-02-03 06:53:37.507359 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:53:37.507371 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:53:37.507384 | orchestrator |
2026-02-03 06:53:37.507397 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-03 06:53:37.507410 | orchestrator | Tuesday 03 February 2026 06:53:17 +0000 (0:00:01.656) 0:58:30.320 ******
2026-02-03 06:53:37.507423 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:53:37.507435 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:53:37.507448 | orchestrator |
2026-02-03 06:53:37.507460 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-03 06:53:37.507473 | orchestrator | Tuesday 03 February 2026 06:53:18 +0000 (0:00:01.307) 0:58:31.627 ******
2026-02-03 06:53:37.507484 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:53:37.507495 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:53:37.507506 | orchestrator |
2026-02-03 06:53:37.507517 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-03 06:53:37.507528 | orchestrator | Tuesday 03 February 2026 06:53:19 +0000 (0:00:01.295) 0:58:32.923 ******
2026-02-03 06:53:37.507539 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:53:37.507550 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:53:37.507561 | orchestrator |
2026-02-03 06:53:37.507573 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-03 06:53:37.507584 | orchestrator | Tuesday 03 February 2026 06:53:21 +0000 (0:00:01.701) 0:58:34.625 ******
2026-02-03 06:53:37.507595 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:53:37.507606 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:53:37.507617 | orchestrator |
2026-02-03 06:53:37.507628 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-03 06:53:37.507639 | orchestrator | Tuesday 03 February 2026 06:53:22 +0000 (0:00:01.391) 0:58:36.017 ******
2026-02-03 06:53:37.507650 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:53:37.507661 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:53:37.507672 | orchestrator |
2026-02-03 06:53:37.507683 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-03 06:53:37.507694 | orchestrator | Tuesday 03 February 2026 06:53:24 +0000 (0:00:01.345) 0:58:37.363 ******
2026-02-03 06:53:37.507705 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-03 06:53:37.507716 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-03 06:53:37.507727 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-03 06:53:37.507738 | orchestrator |
2026-02-03 06:53:37.507748 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-03 06:53:37.507759 | orchestrator | Tuesday 03 February 2026 06:53:26 +0000 (0:00:01.901) 0:58:39.265 ******
2026-02-03 06:53:37.507770
| orchestrator | ok: [testbed-node-4] 2026-02-03 06:53:37.507801 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:53:37.507813 | orchestrator | 2026-02-03 06:53:37.507824 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-03 06:53:37.507835 | orchestrator | Tuesday 03 February 2026 06:53:27 +0000 (0:00:01.493) 0:58:40.758 ****** 2026-02-03 06:53:37.507845 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:53:37.507857 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:53:37.507867 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:53:37.507878 | orchestrator | 2026-02-03 06:53:37.507889 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-03 06:53:37.507900 | orchestrator | Tuesday 03 February 2026 06:53:31 +0000 (0:00:03.490) 0:58:44.249 ****** 2026-02-03 06:53:37.507911 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-03 06:53:37.507924 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-03 06:53:37.507935 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-03 06:53:37.507955 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:53:37.507966 | orchestrator | 2026-02-03 06:53:37.507991 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-03 06:53:37.508003 | orchestrator | Tuesday 03 February 2026 06:53:32 +0000 (0:00:01.515) 0:58:45.764 ****** 2026-02-03 06:53:37.508034 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-03 06:53:37.508049 | 
orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-03 06:53:37.508061 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-03 06:53:37.508072 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:53:37.508083 | orchestrator | 2026-02-03 06:53:37.508094 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-03 06:53:37.508105 | orchestrator | Tuesday 03 February 2026 06:53:34 +0000 (0:00:02.146) 0:58:47.910 ****** 2026-02-03 06:53:37.508118 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 06:53:37.508134 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 06:53:37.508145 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 06:53:37.508156 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:53:37.508167 | orchestrator | 2026-02-03 06:53:37.508178 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-03 06:53:37.508189 | orchestrator | Tuesday 03 February 2026 06:53:36 +0000 (0:00:01.293) 0:58:49.203 ****** 2026-02-03 06:53:37.508202 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'fc9af7e241e8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-03 06:53:28.124759', 'end': '2026-02-03 06:53:28.180764', 'delta': '0:00:00.056005', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fc9af7e241e8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-03 06:53:37.508217 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'a8f198eef309', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-03 06:53:28.748502', 'end': '2026-02-03 06:53:28.799253', 'delta': '0:00:00.050751', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': 
None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a8f198eef309'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-03 06:53:37.508250 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '79d18794d8bb', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-03 06:53:29.781343', 'end': '2026-02-03 06:53:29.836537', 'delta': '0:00:00.055194', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['79d18794d8bb'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-03 06:53:58.866368 | orchestrator | 2026-02-03 06:53:58.866499 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-03 06:53:58.866517 | orchestrator | Tuesday 03 February 2026 06:53:37 +0000 (0:00:01.472) 0:58:50.676 ****** 2026-02-03 06:53:58.866529 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:53:58.866542 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:53:58.866553 | orchestrator | 2026-02-03 06:53:58.866564 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-03 06:53:58.866576 | orchestrator | Tuesday 03 February 2026 06:53:39 +0000 (0:00:01.578) 0:58:52.254 ****** 2026-02-03 06:53:58.866587 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:53:58.866609 | orchestrator | 2026-02-03 06:53:58.866620 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-03 06:53:58.866631 | orchestrator | Tuesday 03 
February 2026 06:53:40 +0000 (0:00:01.374) 0:58:53.629 ******
2026-02-03 06:53:58.866642 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:53:58.866653 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:53:58.866664 | orchestrator |
2026-02-03 06:53:58.866675 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-03 06:53:58.866686 | orchestrator | Tuesday 03 February 2026 06:53:41 +0000 (0:00:01.317) 0:58:54.946 ******
2026-02-03 06:53:58.866697 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-03 06:53:58.866709 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-03 06:53:58.866720 | orchestrator |
2026-02-03 06:53:58.866731 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-03 06:53:58.866742 | orchestrator | Tuesday 03 February 2026 06:53:44 +0000 (0:00:02.294) 0:58:57.240 ******
2026-02-03 06:53:58.866753 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:53:58.866764 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:53:58.866828 | orchestrator |
2026-02-03 06:53:58.866842 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-03 06:53:58.866853 | orchestrator | Tuesday 03 February 2026 06:53:45 +0000 (0:00:01.363) 0:58:58.604 ******
2026-02-03 06:53:58.866864 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:53:58.866876 | orchestrator |
2026-02-03 06:53:58.866887 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-03 06:53:58.866897 | orchestrator | Tuesday 03 February 2026 06:53:46 +0000 (0:00:01.206) 0:58:59.811 ******
2026-02-03 06:53:58.866908 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:53:58.866919 | orchestrator |
2026-02-03 06:53:58.866930 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-03 06:53:58.866940 | orchestrator | Tuesday 03 February 2026 06:53:47 +0000 (0:00:01.286) 0:59:01.098 ******
2026-02-03 06:53:58.866978 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:53:58.866990 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:53:58.867001 | orchestrator |
2026-02-03 06:53:58.867012 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-03 06:53:58.867022 | orchestrator | Tuesday 03 February 2026 06:53:49 +0000 (0:00:01.408) 0:59:02.506 ******
2026-02-03 06:53:58.867033 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:53:58.867044 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:53:58.867055 | orchestrator |
2026-02-03 06:53:58.867066 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-03 06:53:58.867077 | orchestrator | Tuesday 03 February 2026 06:53:50 +0000 (0:00:01.326) 0:59:03.833 ******
2026-02-03 06:53:58.867088 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:53:58.867099 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:53:58.867110 | orchestrator |
2026-02-03 06:53:58.867120 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-03 06:53:58.867131 | orchestrator | Tuesday 03 February 2026 06:53:51 +0000 (0:00:01.310) 0:59:05.144 ******
2026-02-03 06:53:58.867142 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:53:58.867153 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:53:58.867164 | orchestrator |
2026-02-03 06:53:58.867175 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-03 06:53:58.867186 | orchestrator | Tuesday 03 February 2026 06:53:53 +0000 (0:00:01.623) 0:59:06.767 ******
2026-02-03 06:53:58.867197 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:53:58.867207 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:53:58.867218 | orchestrator |
2026-02-03 06:53:58.867229 |
orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-03 06:53:58.867240 | orchestrator | Tuesday 03 February 2026 06:53:55 +0000 (0:00:01.745) 0:59:08.513 ****** 2026-02-03 06:53:58.867251 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:53:58.867262 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:53:58.867273 | orchestrator | 2026-02-03 06:53:58.867284 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-03 06:53:58.867295 | orchestrator | Tuesday 03 February 2026 06:53:56 +0000 (0:00:01.399) 0:59:09.912 ****** 2026-02-03 06:53:58.867306 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:53:58.867317 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:53:58.867328 | orchestrator | 2026-02-03 06:53:58.867339 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-03 06:53:58.867363 | orchestrator | Tuesday 03 February 2026 06:53:58 +0000 (0:00:01.571) 0:59:11.484 ****** 2026-02-03 06:53:58.867377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:53:58.867412 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1a37b12a--042e--589b--8d7d--13944ef33291-osd--block--1a37b12a--042e--589b--8d7d--13944ef33291', 'dm-uuid-LVM-F6tlR8rX28mHBuGZmIB9CPxCef1PwVO1F69HDz3pfwyuxUfx8QlY6u3q4wNOYZvt'], 'uuids': ['ee84a40a-c8f5-4363-8b92-865eb14b3049'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'serial': 'f58f055b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['F69HDz-3pfw-yuxU-fx8Q-lY6u-3q4w-NOYZvt']}})  2026-02-03 06:53:58.867428 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15b94581-7087-40af-83f2-cd9970e768be', 'scsi-SQEMU_QEMU_HARDDISK_15b94581-7087-40af-83f2-cd9970e768be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '15b94581', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-03 06:53:58.867449 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-OIAfSx-9FrO-G71T-2YtW-9cXZ-u9sv-iVlruI', 'scsi-0QEMU_QEMU_HARDDISK_6b074c22-654d-40e5-9251-7e10d9fad41a', 'scsi-SQEMU_QEMU_HARDDISK_6b074c22-654d-40e5-9251-7e10d9fad41a'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6b074c22', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--121565c5--01e5--5794--959e--075d91e35362-osd--block--121565c5--01e5--5794--959e--075d91e35362']}})  2026-02-03 06:53:58.867462 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:53:58.867474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:53:58.867486 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-03 06:53:58.867503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:53:58.867524 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0SrgRP-hb81-g1cZ-8CRq-deoz-Hyru-PhRzun', 'dm-uuid-CRYPT-LUKS2-1805b057808e47489bd25959cb85c8e5-0SrgRP-hb81-g1cZ-8CRq-deoz-Hyru-PhRzun'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-03 06:53:59.160274 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:53:59.160409 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--121565c5--01e5--5794--959e--075d91e35362-osd--block--121565c5--01e5--5794--959e--075d91e35362', 'dm-uuid-LVM-JxrjzObQ9uufb9OS44FMciQneXibANhw0SrgRPhb81g1cZ8CRqdeozHyruPhRzun'], 'uuids': ['1805b057-808e-4748-9bd2-5959cb85c8e5'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6b074c22', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0SrgRP-hb81-g1cZ-8CRq-deoz-Hyru-PhRzun']}})  2026-02-03 06:53:59.160428 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-QlIL1O-6aa2-xc1n-eTaR-0yU7-qpeR-rfKE1n', 'scsi-0QEMU_QEMU_HARDDISK_f58f055b-eadc-4fe1-a72e-2d1917f1f2dd', 'scsi-SQEMU_QEMU_HARDDISK_f58f055b-eadc-4fe1-a72e-2d1917f1f2dd'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f58f055b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1a37b12a--042e--589b--8d7d--13944ef33291-osd--block--1a37b12a--042e--589b--8d7d--13944ef33291']}})  2026-02-03 06:53:59.160442 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:53:59.160494 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9ac79520', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-03 06:53:59.160518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:53:59.160530 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:53:59.160542 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-F69HDz-3pfw-yuxU-fx8Q-lY6u-3q4w-NOYZvt', 'dm-uuid-CRYPT-LUKS2-ee84a40ac8f543638b92865eb14b3049-F69HDz-3pfw-yuxU-fx8Q-lY6u-3q4w-NOYZvt'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-03 06:53:59.160555 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:53:59.160568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 
'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:53:59.160580 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--77c51d77--cdc1--5563--af81--33d9bc4e9bd8-osd--block--77c51d77--cdc1--5563--af81--33d9bc4e9bd8', 'dm-uuid-LVM-Wbq8zZZmzC2gBNhxYxtVTvfLotN9I39ewfUHEKJIYaxWx1lem6PI2cmyC5FHw26a'], 'uuids': ['de4b76bf-9af2-40ae-a6b3-4edbecd71396'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0bcbc917', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['wfUHEK-JIYa-xWx1-lem6-PI2c-myC5-FHw26a']}})  2026-02-03 06:53:59.160598 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ed5f26b-b68f-43a9-951f-f4acae255308', 'scsi-SQEMU_QEMU_HARDDISK_1ed5f26b-b68f-43a9-951f-f4acae255308'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1ed5f26b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-03 06:53:59.160618 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-fs9ehM-rHKw-gnft-ZAPg-F21u-3MhY-bxvv54', 'scsi-0QEMU_QEMU_HARDDISK_a2e14d93-a486-403c-9c37-4f6de49ddee5', 'scsi-SQEMU_QEMU_HARDDISK_a2e14d93-a486-403c-9c37-4f6de49ddee5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2e14d93', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--9cbb71d1--90c1--5063--b304--f845b9e79bfb-osd--block--9cbb71d1--90c1--5063--b304--f845b9e79bfb']}})  2026-02-03 06:54:00.415439 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:54:00.415547 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:54:00.415564 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-39-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-03 06:54:00.415579 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:54:00.415591 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-pTdMvd-sDAF-IMSx-8jpl-0O46-FJH5-Fa8Xca', 'dm-uuid-CRYPT-LUKS2-828a04c154134531b57bb1d5e612c63b-pTdMvd-sDAF-IMSx-8jpl-0O46-FJH5-Fa8Xca'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-03 06:54:00.415620 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:54:00.415634 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9cbb71d1--90c1--5063--b304--f845b9e79bfb-osd--block--9cbb71d1--90c1--5063--b304--f845b9e79bfb', 'dm-uuid-LVM-mOPc0Zn7dvz2LW84SWB0gFMNdSnKuErspTdMvdsDAFIMSx8jpl0O46FJH5Fa8Xca'], 'uuids': ['828a04c1-5413-4531-b57b-b1d5e612c63b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2e14d93', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['pTdMvd-sDAF-IMSx-8jpl-0O46-FJH5-Fa8Xca']}})  2026-02-03 06:54:00.415688 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-XC0deN-vGzU-6Pu8-7l0p-bm5X-RdCc-NCjXuW', 'scsi-0QEMU_QEMU_HARDDISK_0bcbc917-1e4e-4947-8603-c7f49bd04ea8', 'scsi-SQEMU_QEMU_HARDDISK_0bcbc917-1e4e-4947-8603-c7f49bd04ea8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0bcbc917', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--77c51d77--cdc1--5563--af81--33d9bc4e9bd8-osd--block--77c51d77--cdc1--5563--af81--33d9bc4e9bd8']}})  2026-02-03 06:54:00.415702 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:54:00.415724 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1e34e583', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part16', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part14', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part15', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part1', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-03 06:54:00.415739 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:54:00.415759 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:54:00.415870 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-wfUHEK-JIYa-xWx1-lem6-PI2c-myC5-FHw26a', 'dm-uuid-CRYPT-LUKS2-de4b76bf9af240aea6b34edbecd71396-wfUHEK-JIYa-xWx1-lem6-PI2c-myC5-FHw26a'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-03 06:54:00.663394 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:54:00.663548 | orchestrator | 2026-02-03 06:54:00.663577 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-03 06:54:00.663622 | orchestrator | Tuesday 03 February 2026 06:54:00 +0000 (0:00:02.108) 0:59:13.592 ****** 2026-02-03 06:54:00.663662 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:54:00.663686 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1a37b12a--042e--589b--8d7d--13944ef33291-osd--block--1a37b12a--042e--589b--8d7d--13944ef33291', 'dm-uuid-LVM-F6tlR8rX28mHBuGZmIB9CPxCef1PwVO1F69HDz3pfwyuxUfx8QlY6u3q4wNOYZvt'], 'uuids': ['ee84a40a-c8f5-4363-8b92-865eb14b3049'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f58f055b', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['F69HDz-3pfw-yuxU-fx8Q-lY6u-3q4w-NOYZvt']}}, 'ansible_loop_var': 'item'})  2026-02-03 06:54:00.663700 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15b94581-7087-40af-83f2-cd9970e768be', 'scsi-SQEMU_QEMU_HARDDISK_15b94581-7087-40af-83f2-cd9970e768be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '15b94581', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:54:00.663732 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-OIAfSx-9FrO-G71T-2YtW-9cXZ-u9sv-iVlruI', 'scsi-0QEMU_QEMU_HARDDISK_6b074c22-654d-40e5-9251-7e10d9fad41a', 'scsi-SQEMU_QEMU_HARDDISK_6b074c22-654d-40e5-9251-7e10d9fad41a'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6b074c22', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--121565c5--01e5--5794--959e--075d91e35362-osd--block--121565c5--01e5--5794--959e--075d91e35362']}}, 'ansible_loop_var': 'item'})  2026-02-03 06:54:00.663828 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:54:00.663844 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:54:00.663856 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:54:00.663868 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:54:00.663886 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0SrgRP-hb81-g1cZ-8CRq-deoz-Hyru-PhRzun', 'dm-uuid-CRYPT-LUKS2-1805b057808e47489bd25959cb85c8e5-0SrgRP-hb81-g1cZ-8CRq-deoz-Hyru-PhRzun'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:54:00.663908 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:54:00.663923 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:54:00.663944 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--77c51d77--cdc1--5563--af81--33d9bc4e9bd8-osd--block--77c51d77--cdc1--5563--af81--33d9bc4e9bd8', 'dm-uuid-LVM-Wbq8zZZmzC2gBNhxYxtVTvfLotN9I39ewfUHEKJIYaxWx1lem6PI2cmyC5FHw26a'], 'uuids': ['de4b76bf-9af2-40ae-a6b3-4edbecd71396'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0bcbc917', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['wfUHEK-JIYa-xWx1-lem6-PI2c-myC5-FHw26a']}}, 'ansible_loop_var': 'item'})  2026-02-03 06:54:00.735698 | orchestrator | skipping: [testbed-node-4] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--121565c5--01e5--5794--959e--075d91e35362-osd--block--121565c5--01e5--5794--959e--075d91e35362', 'dm-uuid-LVM-JxrjzObQ9uufb9OS44FMciQneXibANhw0SrgRPhb81g1cZ8CRqdeozHyruPhRzun'], 'uuids': ['1805b057-808e-4748-9bd2-5959cb85c8e5'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6b074c22', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0SrgRP-hb81-g1cZ-8CRq-deoz-Hyru-PhRzun']}}, 'ansible_loop_var': 'item'})  2026-02-03 06:54:00.735866 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-QlIL1O-6aa2-xc1n-eTaR-0yU7-qpeR-rfKE1n', 'scsi-0QEMU_QEMU_HARDDISK_f58f055b-eadc-4fe1-a72e-2d1917f1f2dd', 'scsi-SQEMU_QEMU_HARDDISK_f58f055b-eadc-4fe1-a72e-2d1917f1f2dd'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f58f055b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--1a37b12a--042e--589b--8d7d--13944ef33291-osd--block--1a37b12a--042e--589b--8d7d--13944ef33291']}}, 'ansible_loop_var': 'item'})  2026-02-03 06:54:00.735907 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ed5f26b-b68f-43a9-951f-f4acae255308', 'scsi-SQEMU_QEMU_HARDDISK_1ed5f26b-b68f-43a9-951f-f4acae255308'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1ed5f26b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:54:00.735921 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:54:00.735953 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': 
['lvm-pv-uuid-fs9ehM-rHKw-gnft-ZAPg-F21u-3MhY-bxvv54', 'scsi-0QEMU_QEMU_HARDDISK_a2e14d93-a486-403c-9c37-4f6de49ddee5', 'scsi-SQEMU_QEMU_HARDDISK_a2e14d93-a486-403c-9c37-4f6de49ddee5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2e14d93', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--9cbb71d1--90c1--5063--b304--f845b9e79bfb-osd--block--9cbb71d1--90c1--5063--b304--f845b9e79bfb']}}, 'ansible_loop_var': 'item'})  2026-02-03 06:54:00.735974 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9ac79520', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 
'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:54:00.735994 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:54:00.736006 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:54:00.736025 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  
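The long runs of "skipping" above all come from the same guard: the ceph-facts task iterates over every entry of `ansible_facts['devices']` (loop devices, dm-* mappers, the sr0 config-drive, and the sd* disks) and skips each one because `osd_auto_discovery | default(False) | bool` is false on these nodes. A minimal sketch of what such per-device filtering would do if the flag were true is below; the helper name and the exact eligibility criteria (non-virtual, non-removable, unpartitioned, no holders) are illustrative assumptions, not ceph-ansible's actual implementation.

```python
# Sketch of osd_auto_discovery-style device filtering.
# Assumed criteria (hypothetical, for illustration): real, non-removable
# disks with no partitions and no holders qualify as OSD candidates.
def discover_osd_devices(devices, osd_auto_discovery=False):
    """Return device paths eligible as OSDs, mirroring the log's
    'osd_auto_discovery | default(False) | bool' guard."""
    if not osd_auto_discovery:
        # Condition is false -> every loop item is skipped, as in the log.
        return []
    eligible = []
    for name, info in devices.items():
        if name.startswith(("loop", "dm-", "sr")):
            continue  # virtual devices, device-mapper targets, optical drives
        if info.get("removable") == "1":
            continue  # e.g. the QEMU DVD-ROM config drive
        if info.get("partitions") or info.get("holders"):
            continue  # already partitioned or claimed (e.g. by a ceph LV)
        eligible.append("/dev/" + name)
    return sorted(eligible)

# Shape mirrors the testbed-node facts above: sdd is the only unused disk.
facts = {
    "sda":   {"removable": "0", "partitions": {"sda1": {}}, "holders": []},
    "sdb":   {"removable": "0", "partitions": {}, "holders": ["ceph-osd-block"]},
    "sdd":   {"removable": "0", "partitions": {}, "holders": []},
    "sr0":   {"removable": "1", "partitions": {}, "holders": []},
    "loop0": {"removable": "0", "partitions": {}, "holders": []},
}
print(discover_osd_devices(facts, osd_auto_discovery=False))  # []
print(discover_osd_devices(facts, osd_auto_discovery=True))   # ['/dev/sdd']
```

With the flag off (the testbed's configuration here), the list stays empty and the playbook falls back to the explicitly configured `devices` list, which is why every fact entry is skipped rather than collected.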
2026-02-03 06:54:00.842730 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:54:00.842911 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:54:00.842966 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-F69HDz-3pfw-yuxU-fx8Q-lY6u-3q4w-NOYZvt', 'dm-uuid-CRYPT-LUKS2-ee84a40ac8f543638b92865eb14b3049-F69HDz-3pfw-yuxU-fx8Q-lY6u-3q4w-NOYZvt'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:54:00.842980 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:54:00.842992 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:54:00.843007 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-pTdMvd-sDAF-IMSx-8jpl-0O46-FJH5-Fa8Xca', 'dm-uuid-CRYPT-LUKS2-828a04c154134531b57bb1d5e612c63b-pTdMvd-sDAF-IMSx-8jpl-0O46-FJH5-Fa8Xca'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:54:00.843036 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-03 06:54:00.843050 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9cbb71d1--90c1--5063--b304--f845b9e79bfb-osd--block--9cbb71d1--90c1--5063--b304--f845b9e79bfb', 'dm-uuid-LVM-mOPc0Zn7dvz2LW84SWB0gFMNdSnKuErspTdMvdsDAFIMSx8jpl0O46FJH5Fa8Xca'], 'uuids': ['828a04c1-5413-4531-b57b-b1d5e612c63b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2e14d93', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['pTdMvd-sDAF-IMSx-8jpl-0O46-FJH5-Fa8Xca']}}, 'ansible_loop_var': 'item'})
2026-02-03 06:54:00.843076 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-XC0deN-vGzU-6Pu8-7l0p-bm5X-RdCc-NCjXuW', 'scsi-0QEMU_QEMU_HARDDISK_0bcbc917-1e4e-4947-8603-c7f49bd04ea8', 'scsi-SQEMU_QEMU_HARDDISK_0bcbc917-1e4e-4947-8603-c7f49bd04ea8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0bcbc917', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--77c51d77--cdc1--5563--af81--33d9bc4e9bd8-osd--block--77c51d77--cdc1--5563--af81--33d9bc4e9bd8']}}, 'ansible_loop_var': 'item'})
2026-02-03 06:54:00.843091 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-03 06:54:00.843113 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1e34e583', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part16', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part14', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part15', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part1', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-03 06:54:32.412112 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-03 06:54:32.412284 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-03 06:54:32.412302 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-wfUHEK-JIYa-xWx1-lem6-PI2c-myC5-FHw26a', 'dm-uuid-CRYPT-LUKS2-de4b76bf9af240aea6b34edbecd71396-wfUHEK-JIYa-xWx1-lem6-PI2c-myC5-FHw26a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-03 06:54:32.412314 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:54:32.412326 | orchestrator |
2026-02-03 06:54:32.412336 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-03 06:54:32.412346 | orchestrator | Tuesday 03 February 2026 06:54:02 +0000 (0:00:01.647) 0:59:15.241 ******
2026-02-03 06:54:32.412355 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:54:32.412364 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:54:32.412373 | orchestrator |
2026-02-03 06:54:32.412382 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-03 06:54:32.412391 | orchestrator | Tuesday 03 February 2026 06:54:03 +0000 (0:00:01.719) 0:59:16.960 ******
2026-02-03 06:54:32.412399 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:54:32.412408 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:54:32.412416 | orchestrator |
2026-02-03 06:54:32.412425 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-03 06:54:32.412434 | orchestrator | Tuesday 03 February 2026 06:54:05 +0000 (0:00:01.324) 0:59:18.284 ******
2026-02-03 06:54:32.412442 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:54:32.412451 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:54:32.412459 | orchestrator |
2026-02-03 06:54:32.412468 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-03 06:54:32.412477 | orchestrator | Tuesday 03 February 2026 06:54:06 +0000 (0:00:01.727) 0:59:20.012 ******
2026-02-03 06:54:32.412485 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:54:32.412495 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:54:32.412503 | orchestrator |
2026-02-03 06:54:32.412512 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-03 06:54:32.412521 | orchestrator | Tuesday 03 February 2026 06:54:08 +0000 (0:00:01.312) 0:59:21.324 ******
2026-02-03 06:54:32.412530 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:54:32.412538 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:54:32.412547 | orchestrator |
2026-02-03 06:54:32.412556 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-03 06:54:32.412564 | orchestrator | Tuesday 03 February 2026 06:54:10 +0000 (0:00:01.916) 0:59:23.241 ******
2026-02-03 06:54:32.412586 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:54:32.412595 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:54:32.412604 | orchestrator |
2026-02-03 06:54:32.412613 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-03 06:54:32.412621 | orchestrator | Tuesday 03 February 2026 06:54:11 +0000 (0:00:01.389) 0:59:24.630 ******
2026-02-03 06:54:32.412630 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-03 06:54:32.412639 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-03 06:54:32.412649 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-03 06:54:32.412660 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-03 06:54:32.412670 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-03 06:54:32.412679 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-03 06:54:32.412690 | orchestrator |
2026-02-03 06:54:32.412700 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-03 06:54:32.412710 | orchestrator | Tuesday 03 February 2026 06:54:13 +0000 (0:00:01.899) 0:59:26.530 ******
2026-02-03 06:54:32.412736 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-03 06:54:32.412747 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-03 06:54:32.412757 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-03 06:54:32.412767 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:54:32.412807 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-03 06:54:32.412817 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-03 06:54:32.412828 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-03 06:54:32.412838 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:54:32.412849 | orchestrator |
2026-02-03 06:54:32.412859 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-03 06:54:32.412870 | orchestrator | Tuesday 03 February 2026 06:54:14 +0000 (0:00:01.384) 0:59:27.914 ******
2026-02-03 06:54:32.412880 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4, testbed-node-5
2026-02-03 06:54:32.412892 | orchestrator |
2026-02-03 06:54:32.412907 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-03 06:54:32.412919 | orchestrator | Tuesday 03 February 2026 06:54:16 +0000 (0:00:01.378) 0:59:29.292 ******
2026-02-03 06:54:32.412929 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:54:32.412939 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:54:32.412949 | orchestrator |
2026-02-03 06:54:32.412960 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-03 06:54:32.412970 | orchestrator | Tuesday 03 February 2026 06:54:17 +0000 (0:00:01.325) 0:59:30.618 ******
2026-02-03 06:54:32.412980 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:54:32.412990 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:54:32.413000 | orchestrator |
2026-02-03 06:54:32.413009 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-03 06:54:32.413017 | orchestrator | Tuesday 03 February 2026 06:54:19 +0000 (0:00:01.663) 0:59:32.282 ******
2026-02-03 06:54:32.413026 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:54:32.413035 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:54:32.413044 | orchestrator |
2026-02-03 06:54:32.413052 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-03 06:54:32.413061 | orchestrator | Tuesday 03 February 2026 06:54:20 +0000 (0:00:01.360) 0:59:33.642 ******
2026-02-03 06:54:32.413070 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:54:32.413078 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:54:32.413087 | orchestrator |
2026-02-03 06:54:32.413096 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-03 06:54:32.413104 | orchestrator | Tuesday 03 February 2026 06:54:21 +0000 (0:00:01.454) 0:59:35.096 ******
2026-02-03 06:54:32.413119 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-03 06:54:32.413128 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-03 06:54:32.413137 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-03 06:54:32.413145 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:54:32.413154 | orchestrator |
2026-02-03 06:54:32.413163 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-03 06:54:32.413172 | orchestrator | Tuesday 03 February 2026 06:54:23 +0000 (0:00:01.614) 0:59:36.711 ******
2026-02-03 06:54:32.413180 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-03 06:54:32.413189 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-03 06:54:32.413198 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-03 06:54:32.413207 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:54:32.413215 | orchestrator |
2026-02-03 06:54:32.413224 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-03 06:54:32.413233 | orchestrator | Tuesday 03 February 2026 06:54:25 +0000 (0:00:01.612) 0:59:38.324 ******
2026-02-03 06:54:32.413241 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-03 06:54:32.413250 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-03 06:54:32.413259 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-03 06:54:32.413267 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:54:32.413276 | orchestrator |
2026-02-03 06:54:32.413285 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-03 06:54:32.413293 | orchestrator | Tuesday 03 February 2026 06:54:26 +0000 (0:00:01.543) 0:59:39.867 ******
2026-02-03 06:54:32.413302 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:54:32.413311 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:54:32.413320 | orchestrator |
2026-02-03 06:54:32.413328 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-03 06:54:32.413337 | orchestrator | Tuesday 03 February 2026 06:54:28 +0000 (0:00:01.352) 0:59:41.220 ******
2026-02-03 06:54:32.413346 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-03 06:54:32.413355 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-03 06:54:32.413363 | orchestrator |
2026-02-03 06:54:32.413372 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-03 06:54:32.413381 | orchestrator | Tuesday 03 February 2026 06:54:29 +0000 (0:00:01.888) 0:59:43.109 ******
2026-02-03 06:54:32.413390 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-03 06:54:32.413398 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-03 06:54:32.413407 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-03 06:54:32.413416 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-03 06:54:32.413424 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-02-03 06:54:32.413433 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-03 06:54:32.413448 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-03 06:55:20.551352 | orchestrator |
2026-02-03 06:55:20.551435 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-03 06:55:20.551444 | orchestrator | Tuesday 03 February 2026 06:54:32 +0000 (0:00:02.465) 0:59:45.575 ******
2026-02-03 06:55:20.551450 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-03 06:55:20.551457 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-03 06:55:20.551462 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-03 06:55:20.551467 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-03 06:55:20.551492 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-02-03 06:55:20.551498 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-03 06:55:20.551515 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-03 06:55:20.551520 | orchestrator |
2026-02-03 06:55:20.551525 | orchestrator | TASK [Prevent restarts from the packaging] *************************************
2026-02-03 06:55:20.551530 | orchestrator | Tuesday 03 February 2026 06:54:35 +0000 (0:00:02.821) 0:59:48.397 ******
2026-02-03 06:55:20.551536 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:55:20.551542 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:55:20.551547 | orchestrator |
2026-02-03 06:55:20.551552 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-03 06:55:20.551557 | orchestrator | Tuesday 03 February 2026 06:54:36 +0000 (0:00:01.339) 0:59:49.736 ******
2026-02-03 06:55:20.551562 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4, testbed-node-5
2026-02-03 06:55:20.551568 | orchestrator |
2026-02-03 06:55:20.551574 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-03 06:55:20.551579 | orchestrator | Tuesday 03 February 2026 06:54:37 +0000 (0:00:01.279) 0:59:51.016 ******
2026-02-03 06:55:20.551584 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4, testbed-node-5
2026-02-03 06:55:20.551589 | orchestrator |
2026-02-03 06:55:20.551594 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-03 06:55:20.551599 | orchestrator | Tuesday 03 February 2026 06:54:39 +0000 (0:00:01.545) 0:59:52.561 ******
2026-02-03 06:55:20.551604 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:55:20.551609 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:55:20.551614 | orchestrator |
2026-02-03 06:55:20.551619 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-03 06:55:20.551624 | orchestrator | Tuesday 03 February 2026 06:54:40 +0000 (0:00:01.322) 0:59:53.884 ******
2026-02-03 06:55:20.551629 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:55:20.551635 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:55:20.551640 | orchestrator |
2026-02-03 06:55:20.551645 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-03 06:55:20.551650 | orchestrator | Tuesday 03 February 2026 06:54:42 +0000 (0:00:01.752) 0:59:55.637 ******
2026-02-03 06:55:20.551655 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:55:20.551660 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:55:20.551665 | orchestrator |
2026-02-03 06:55:20.551670 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-03 06:55:20.551675 | orchestrator | Tuesday 03 February 2026 06:54:44 +0000 (0:00:02.170) 0:59:57.808 ******
2026-02-03 06:55:20.551680 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:55:20.551685 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:55:20.551690 | orchestrator |
2026-02-03 06:55:20.551695 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-03 06:55:20.551700 | orchestrator | Tuesday 03 February 2026 06:54:46 +0000 (0:00:01.836) 0:59:59.645 ******
2026-02-03 06:55:20.551706 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:55:20.551711 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:55:20.551716 | orchestrator |
2026-02-03 06:55:20.551721 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-03 06:55:20.551726 | orchestrator | Tuesday 03 February 2026 06:54:47 +0000 (0:00:01.370) 1:00:01.015 ******
2026-02-03 06:55:20.551731 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:55:20.551736 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:55:20.551741 | orchestrator |
2026-02-03 06:55:20.551746 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-03 06:55:20.551751 | orchestrator | Tuesday 03 February 2026 06:54:49 +0000 (0:00:01.274) 1:00:02.290 ******
2026-02-03 06:55:20.551756 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:55:20.551789 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:55:20.551797 | orchestrator |
2026-02-03 06:55:20.551802 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-03 06:55:20.551807 | orchestrator | Tuesday 03 February 2026 06:54:50 +0000 (0:00:01.379) 1:00:03.670 ******
2026-02-03 06:55:20.551812 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:55:20.551817 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:55:20.551822 | orchestrator |
2026-02-03 06:55:20.551827 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-03 06:55:20.551832 | orchestrator | Tuesday 03 February 2026 06:54:52 +0000 (0:00:01.744) 1:00:05.414 ******
2026-02-03 06:55:20.551837 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:55:20.551843 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:55:20.551848 | orchestrator |
2026-02-03 06:55:20.551853 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-03 06:55:20.551858 | orchestrator | Tuesday 03 February 2026 06:54:54 +0000 (0:00:01.435) 1:00:07.193 ******
2026-02-03 06:55:20.551863 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:55:20.551868 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:55:20.551873 | orchestrator |
2026-02-03 06:55:20.551878 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-03 06:55:20.551883 | orchestrator | Tuesday 03 February 2026 06:54:55 +0000 (0:00:01.340) 1:00:08.629 ******
2026-02-03 06:55:20.551888 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:55:20.551904 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:55:20.551910 | orchestrator |
2026-02-03 06:55:20.551915 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-03 06:55:20.551921 | orchestrator | Tuesday 03 February 2026 06:54:56 +0000 (0:00:01.340) 1:00:09.970 ******
2026-02-03 06:55:20.551927 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:55:20.551934 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:55:20.551940 | orchestrator |
2026-02-03 06:55:20.551946 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-03 06:55:20.551952 | orchestrator | Tuesday 03 February 2026 06:54:58 +0000 (0:00:01.298) 1:00:11.268 ******
2026-02-03 06:55:20.551958 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:55:20.551964 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:55:20.551969 | orchestrator |
2026-02-03 06:55:20.551975 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-03 06:55:20.551981 | orchestrator | Tuesday 03 February 2026 06:54:59 +0000 (0:00:01.356) 1:00:12.624 ******
2026-02-03 06:55:20.551987 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:55:20.551996 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:55:20.552003 | orchestrator |
2026-02-03 06:55:20.552009 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-03 06:55:20.552015 | orchestrator | Tuesday 03 February 2026 06:55:01 +0000 (0:00:01.783) 1:00:14.408 ******
2026-02-03 06:55:20.552021 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:55:20.552027 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:55:20.552033 | orchestrator |
2026-02-03 06:55:20.552039 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-03 06:55:20.552045 | orchestrator | Tuesday 03 February 2026 06:55:02 +0000 (0:00:01.444) 1:00:15.853 ******
2026-02-03 06:55:20.552051 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:55:20.552057 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:55:20.552063 | orchestrator |
2026-02-03 06:55:20.552069 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-03 06:55:20.552075 | orchestrator | Tuesday 03 February 2026 06:55:04 +0000 (0:00:01.420) 1:00:17.273 ******
2026-02-03 06:55:20.552081 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:55:20.552087 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:55:20.552093 | orchestrator |
2026-02-03 06:55:20.552099 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-03 06:55:20.552105 | orchestrator | Tuesday 03 February 2026 06:55:05 +0000 (0:00:01.337) 1:00:18.611 ******
2026-02-03 06:55:20.552115 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:55:20.552121 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:55:20.552127 | orchestrator |
2026-02-03 06:55:20.552135 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-03 06:55:20.552144 | orchestrator | Tuesday 03 February 2026 06:55:06 +0000 (0:00:01.401) 1:00:20.013 ******
2026-02-03 06:55:20.552152 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:55:20.552165 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:55:20.552176 | orchestrator |
2026-02-03 06:55:20.552184 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-03 06:55:20.552192 | orchestrator | Tuesday 03 February 2026 06:55:08 +0000 (0:00:01.553) 1:00:21.567 ******
2026-02-03 06:55:20.552200 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:55:20.552208 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:55:20.552216 | orchestrator |
2026-02-03 06:55:20.552224 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-03 06:55:20.552231 | orchestrator | Tuesday 03 February 2026 06:55:09 +0000 (0:00:01.308) 1:00:22.876 ******
2026-02-03 06:55:20.552239 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:55:20.552247 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:55:20.552256 | orchestrator |
2026-02-03 06:55:20.552264 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-03 06:55:20.552272 | orchestrator | Tuesday 03 February 2026 06:55:11 +0000 (0:00:01.353) 1:00:24.230 ******
2026-02-03 06:55:20.552281 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:55:20.552289 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:55:20.552298 | orchestrator |
2026-02-03 06:55:20.552306 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-03 06:55:20.552315 | orchestrator | Tuesday 03 February 2026 06:55:12 +0000 (0:00:01.307) 1:00:25.537 ******
2026-02-03 06:55:20.552323 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:55:20.552332 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:55:20.552340 | orchestrator |
2026-02-03 06:55:20.552348 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-03 06:55:20.552357 | orchestrator | Tuesday 03 February 2026 06:55:13 +0000 (0:00:01.468) 1:00:27.006 ******
2026-02-03 06:55:20.552366 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:55:20.552374 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:55:20.552383 | orchestrator |
2026-02-03 06:55:20.552391 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-03 06:55:20.552400 | orchestrator | Tuesday 03 February 2026 06:55:15 +0000 (0:00:01.273) 1:00:28.279 ******
2026-02-03 06:55:20.552408 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:55:20.552415 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:55:20.552420 | orchestrator |
2026-02-03 06:55:20.552425 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-03 06:55:20.552430 | orchestrator | Tuesday 03 February 2026 06:55:16 +0000 (0:00:01.273) 1:00:29.553 ******
2026-02-03 06:55:20.552435 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:55:20.552440 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:55:20.552445 | orchestrator |
2026-02-03 06:55:20.552450 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-03 06:55:20.552455 | orchestrator | Tuesday 03 February 2026 06:55:17 +0000 (0:00:01.462) 1:00:31.015 ******
2026-02-03 06:55:20.552460 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:55:20.552467 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:55:20.552476 | orchestrator |
2026-02-03 06:55:20.552483 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-03 06:55:20.552488 | orchestrator | Tuesday 03 February 2026 06:55:19 +0000 (0:00:01.295) 1:00:32.311 ******
2026-02-03 06:55:20.552493 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:55:20.552498 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:55:20.552503 | orchestrator |
2026-02-03 06:55:20.552514 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-03 06:56:08.197625 | orchestrator | Tuesday 03 February 2026 06:55:20 +0000 (0:00:01.416) 1:00:33.727 ******
2026-02-03 06:56:08.197734 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:56:08.197747 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:56:08.197754 | orchestrator |
2026-02-03 06:56:08.197817 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-03 06:56:08.197829 | orchestrator | Tuesday 03 February 2026 06:55:21 +0000 (0:00:01.290) 1:00:35.018 ******
2026-02-03 06:56:08.197839 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:56:08.197849 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:56:08.197860 | orchestrator |
2026-02-03 06:56:08.197866 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-03 06:56:08.197872 | orchestrator | Tuesday 03 February 2026 06:55:23 +0000 (0:00:01.301) 1:00:36.320 ******
2026-02-03 06:56:08.197877 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:56:08.197883 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:56:08.197888 | orchestrator |
2026-02-03 06:56:08.197908 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-03 06:56:08.197913 | orchestrator | Tuesday 03 February 2026 06:55:24 +0000 (0:00:01.305) 1:00:37.625 ******
2026-02-03 06:56:08.197919 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:56:08.197926 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:56:08.197931 | orchestrator |
2026-02-03 06:56:08.197941 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-03 06:56:08.197950 | orchestrator | Tuesday 03 February 2026 06:55:27 +0000 (0:00:02.623) 1:00:40.248 ******
2026-02-03 06:56:08.197960 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:56:08.197968 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:56:08.197978 | orchestrator |
2026-02-03 06:56:08.197988 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-03 06:56:08.197998 | orchestrator | Tuesday 03 February 2026 06:55:29 +0000 (0:00:02.552) 1:00:42.801 ******
2026-02-03 06:56:08.198008 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4, testbed-node-5
2026-02-03 06:56:08.198053 | orchestrator |
2026-02-03 06:56:08.198059 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-03 06:56:08.198065 | orchestrator | Tuesday 03 February 2026 06:55:30 +0000 (0:00:01.244) 1:00:44.045 ******
2026-02-03 06:56:08.198071 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:56:08.198077 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:56:08.198082 | orchestrator |
2026-02-03 06:56:08.198088 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-03 06:56:08.198094 | orchestrator | Tuesday 03 February 2026 06:55:32 +0000 (0:00:01.333) 1:00:45.379 ******
2026-02-03 06:56:08.198099 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:56:08.198107 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:56:08.198116 | orchestrator |
2026-02-03 06:56:08.198125 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-03 06:56:08.198134 | orchestrator | Tuesday 03 February 2026 06:55:33 +0000 (0:00:01.261) 1:00:46.640 ******
2026-02-03 06:56:08.198142 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-03 06:56:08.198150 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-03 06:56:08.198159 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-03 06:56:08.198168 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-03 06:56:08.198178 | orchestrator |
2026-02-03 06:56:08.198188 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-03 06:56:08.198198 | orchestrator | Tuesday 03 February 2026 06:55:35 +0000 (0:00:02.137) 1:00:48.777 ******
2026-02-03 06:56:08.198204 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:56:08.198211 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:56:08.198218 | orchestrator |
2026-02-03 06:56:08.198224 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-03 06:56:08.198250 | orchestrator | Tuesday 03 February 2026 06:55:37 +0000 (0:00:01.720) 1:00:50.497 ******
2026-02-03 06:56:08.198256 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:56:08.198262 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:56:08.198268 | orchestrator |
2026-02-03 06:56:08.198275 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-03 06:56:08.198281 | orchestrator | Tuesday 03 February 2026 06:55:38 +0000 (0:00:01.290) 1:00:51.788 ******
2026-02-03 06:56:08.198287 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:56:08.198294 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:56:08.198300 | orchestrator |
2026-02-03 06:56:08.198306 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-03 06:56:08.198312 | orchestrator | Tuesday 03 February 2026 06:55:39 +0000 (0:00:01.375) 1:00:53.164 ******
2026-02-03 06:56:08.198318 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:56:08.198325 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:56:08.198331 | orchestrator |
2026-02-03 06:56:08.198337 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-03 06:56:08.198343 | orchestrator | Tuesday 03 February 2026 06:55:41 +0000 (0:00:01.298) 1:00:54.505 ******
2026-02-03 06:56:08.198350 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4, testbed-node-5
2026-02-03 06:56:08.198356 | orchestrator |
2026-02-03 06:56:08.198363 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-03 06:56:08.198369 | orchestrator | Tuesday 03 February 2026 06:55:42 +0000 (0:00:01.298) 1:00:55.803 ******
2026-02-03 06:56:08.198375 | orchestrator | ok: [testbed-node-4]
2026-02-03 06:56:08.198381 | orchestrator | ok: [testbed-node-5]
2026-02-03 06:56:08.198388 | orchestrator |
2026-02-03 06:56:08.198397 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-03 06:56:08.198407 | orchestrator | Tuesday 03 February 2026 06:55:44 +0000 (0:00:02.307) 1:00:58.110 ******
2026-02-03 06:56:08.198417 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-03 06:56:08.198442 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-03 06:56:08.198452 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-03 06:56:08.198461 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:56:08.198470 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-03 06:56:08.198479 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-03 06:56:08.198488 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-03 06:56:08.198497 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:56:08.198505 | orchestrator |
2026-02-03 06:56:08.198514 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-03 06:56:08.198522 | orchestrator | Tuesday 03 February 2026 06:55:46 +0000 (0:00:01.402) 1:00:59.513 ******
2026-02-03 06:56:08.198530 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:56:08.198544 | orchestrator | skipping: [testbed-node-5]
2026-02-03 06:56:08.198554 | orchestrator |
2026-02-03 06:56:08.198562 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-03 06:56:08.198570 | orchestrator | Tuesday 03 February 2026 06:55:47 +0000 (0:00:01.316) 1:01:00.829 ******
2026-02-03 06:56:08.198579 | orchestrator | skipping: [testbed-node-4]
2026-02-03 06:56:08.198587 | orchestrator |
2026-02-03 06:56:08.198597 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-03 06:56:08.198605 | orchestrator | Tuesday 03 February 2026 06:55:48 +0000 (0:00:01.212) 1:01:02.042 ******
2026-02-03 06:56:08.198614 | orchestrator |
skipping: [testbed-node-4] 2026-02-03 06:56:08.198622 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:56:08.198630 | orchestrator | 2026-02-03 06:56:08.198639 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-03 06:56:08.198657 | orchestrator | Tuesday 03 February 2026 06:55:50 +0000 (0:00:01.409) 1:01:03.452 ****** 2026-02-03 06:56:08.198666 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:56:08.198675 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:56:08.198684 | orchestrator | 2026-02-03 06:56:08.198692 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-03 06:56:08.198701 | orchestrator | Tuesday 03 February 2026 06:55:51 +0000 (0:00:01.354) 1:01:04.806 ****** 2026-02-03 06:56:08.198710 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:56:08.198720 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:56:08.198729 | orchestrator | 2026-02-03 06:56:08.198737 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-03 06:56:08.198746 | orchestrator | Tuesday 03 February 2026 06:55:52 +0000 (0:00:01.255) 1:01:06.062 ****** 2026-02-03 06:56:08.198756 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:56:08.198785 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:56:08.198794 | orchestrator | 2026-02-03 06:56:08.198802 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-03 06:56:08.198812 | orchestrator | Tuesday 03 February 2026 06:55:55 +0000 (0:00:02.991) 1:01:09.053 ****** 2026-02-03 06:56:08.198821 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:56:08.198829 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:56:08.198838 | orchestrator | 2026-02-03 06:56:08.198846 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-03 06:56:08.198855 | orchestrator 
| Tuesday 03 February 2026 06:55:57 +0000 (0:00:01.361) 1:01:10.414 ****** 2026-02-03 06:56:08.198865 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4, testbed-node-5 2026-02-03 06:56:08.198872 | orchestrator | 2026-02-03 06:56:08.198877 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-03 06:56:08.198882 | orchestrator | Tuesday 03 February 2026 06:55:58 +0000 (0:00:01.259) 1:01:11.673 ****** 2026-02-03 06:56:08.198888 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:56:08.198893 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:56:08.198899 | orchestrator | 2026-02-03 06:56:08.198904 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-03 06:56:08.198909 | orchestrator | Tuesday 03 February 2026 06:55:59 +0000 (0:00:01.331) 1:01:13.005 ****** 2026-02-03 06:56:08.198915 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:56:08.198920 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:56:08.198925 | orchestrator | 2026-02-03 06:56:08.198931 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-03 06:56:08.198936 | orchestrator | Tuesday 03 February 2026 06:56:01 +0000 (0:00:01.351) 1:01:14.357 ****** 2026-02-03 06:56:08.198941 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:56:08.198947 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:56:08.198952 | orchestrator | 2026-02-03 06:56:08.198957 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-03 06:56:08.198963 | orchestrator | Tuesday 03 February 2026 06:56:02 +0000 (0:00:01.328) 1:01:15.685 ****** 2026-02-03 06:56:08.198968 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:56:08.198974 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:56:08.198979 | orchestrator | 2026-02-03 
06:56:08.198984 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-03 06:56:08.198990 | orchestrator | Tuesday 03 February 2026 06:56:04 +0000 (0:00:01.727) 1:01:17.413 ****** 2026-02-03 06:56:08.198995 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:56:08.199000 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:56:08.199006 | orchestrator | 2026-02-03 06:56:08.199011 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-03 06:56:08.199016 | orchestrator | Tuesday 03 February 2026 06:56:05 +0000 (0:00:01.393) 1:01:18.807 ****** 2026-02-03 06:56:08.199022 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:56:08.199027 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:56:08.199039 | orchestrator | 2026-02-03 06:56:08.199045 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-03 06:56:08.199050 | orchestrator | Tuesday 03 February 2026 06:56:06 +0000 (0:00:01.247) 1:01:20.054 ****** 2026-02-03 06:56:08.199055 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:56:08.199061 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:56:08.199066 | orchestrator | 2026-02-03 06:56:08.199080 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-03 06:56:52.543490 | orchestrator | Tuesday 03 February 2026 06:56:08 +0000 (0:00:01.314) 1:01:21.370 ****** 2026-02-03 06:56:52.543603 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:56:52.543620 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:56:52.543631 | orchestrator | 2026-02-03 06:56:52.543642 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-03 06:56:52.543652 | orchestrator | Tuesday 03 February 2026 06:56:09 +0000 (0:00:01.249) 1:01:22.619 ****** 2026-02-03 06:56:52.543662 | orchestrator | ok: 
[testbed-node-4] 2026-02-03 06:56:52.543673 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:56:52.543683 | orchestrator | 2026-02-03 06:56:52.543693 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-03 06:56:52.543703 | orchestrator | Tuesday 03 February 2026 06:56:10 +0000 (0:00:01.568) 1:01:24.188 ****** 2026-02-03 06:56:52.543728 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4, testbed-node-5 2026-02-03 06:56:52.543743 | orchestrator | 2026-02-03 06:56:52.543783 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-03 06:56:52.543799 | orchestrator | Tuesday 03 February 2026 06:56:12 +0000 (0:00:01.290) 1:01:25.478 ****** 2026-02-03 06:56:52.543815 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-02-03 06:56:52.543831 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph) 2026-02-03 06:56:52.543848 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-03 06:56:52.543863 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/) 2026-02-03 06:56:52.543879 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-03 06:56:52.543896 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-02-03 06:56:52.543911 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-03 06:56:52.543926 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-02-03 06:56:52.543942 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-03 06:56:52.543958 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-02-03 06:56:52.543973 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-03 06:56:52.543989 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-02-03 06:56:52.544007 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 
2026-02-03 06:56:52.544026 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-02-03 06:56:52.544045 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-02-03 06:56:52.544064 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-03 06:56:52.544084 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-03 06:56:52.544105 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-03 06:56:52.544125 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-03 06:56:52.544144 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-03 06:56:52.544163 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-03 06:56:52.544181 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-03 06:56:52.544199 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-03 06:56:52.544218 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-03 06:56:52.544237 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-03 06:56:52.544281 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-03 06:56:52.544300 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-03 06:56:52.544318 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-03 06:56:52.544337 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-02-03 06:56:52.544354 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2026-02-03 06:56:52.544369 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-02-03 06:56:52.544385 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph) 2026-02-03 06:56:52.544401 | orchestrator | 2026-02-03 06:56:52.544418 | orchestrator | TASK 
[ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-03 06:56:52.544434 | orchestrator | Tuesday 03 February 2026 06:56:19 +0000 (0:00:07.294) 1:01:32.772 ****** 2026-02-03 06:56:52.544450 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4, testbed-node-5 2026-02-03 06:56:52.544466 | orchestrator | 2026-02-03 06:56:52.544483 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-03 06:56:52.544498 | orchestrator | Tuesday 03 February 2026 06:56:21 +0000 (0:00:01.461) 1:01:34.234 ****** 2026-02-03 06:56:52.544515 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-03 06:56:52.544532 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-03 06:56:52.544546 | orchestrator | 2026-02-03 06:56:52.544562 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-03 06:56:52.544601 | orchestrator | Tuesday 03 February 2026 06:56:22 +0000 (0:00:01.765) 1:01:36.000 ****** 2026-02-03 06:56:52.544618 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-03 06:56:52.544634 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-03 06:56:52.544649 | orchestrator | 2026-02-03 06:56:52.544664 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-03 06:56:52.544702 | orchestrator | Tuesday 03 February 2026 06:56:25 +0000 (0:00:02.877) 1:01:38.878 ****** 2026-02-03 06:56:52.544717 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:56:52.544732 | orchestrator | 
skipping: [testbed-node-5] 2026-02-03 06:56:52.544747 | orchestrator | 2026-02-03 06:56:52.544852 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-03 06:56:52.544868 | orchestrator | Tuesday 03 February 2026 06:56:26 +0000 (0:00:01.308) 1:01:40.186 ****** 2026-02-03 06:56:52.544882 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:56:52.544899 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:56:52.544915 | orchestrator | 2026-02-03 06:56:52.544930 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-03 06:56:52.544946 | orchestrator | Tuesday 03 February 2026 06:56:28 +0000 (0:00:01.395) 1:01:41.581 ****** 2026-02-03 06:56:52.544961 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:56:52.544977 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:56:52.544993 | orchestrator | 2026-02-03 06:56:52.545021 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-03 06:56:52.545038 | orchestrator | Tuesday 03 February 2026 06:56:29 +0000 (0:00:01.335) 1:01:42.917 ****** 2026-02-03 06:56:52.545054 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:56:52.545072 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:56:52.545088 | orchestrator | 2026-02-03 06:56:52.545104 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-03 06:56:52.545121 | orchestrator | Tuesday 03 February 2026 06:56:31 +0000 (0:00:01.341) 1:01:44.259 ****** 2026-02-03 06:56:52.545137 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:56:52.545169 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:56:52.545184 | orchestrator | 2026-02-03 06:56:52.545200 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-03 06:56:52.545214 | orchestrator | Tuesday 03 February 2026 
06:56:32 +0000 (0:00:01.388) 1:01:45.647 ****** 2026-02-03 06:56:52.545230 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:56:52.545247 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:56:52.545262 | orchestrator | 2026-02-03 06:56:52.545280 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-03 06:56:52.545297 | orchestrator | Tuesday 03 February 2026 06:56:33 +0000 (0:00:01.401) 1:01:47.049 ****** 2026-02-03 06:56:52.545313 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:56:52.545327 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:56:52.545337 | orchestrator | 2026-02-03 06:56:52.545347 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-03 06:56:52.545356 | orchestrator | Tuesday 03 February 2026 06:56:35 +0000 (0:00:01.304) 1:01:48.354 ****** 2026-02-03 06:56:52.545364 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:56:52.545372 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:56:52.545380 | orchestrator | 2026-02-03 06:56:52.545388 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-03 06:56:52.545396 | orchestrator | Tuesday 03 February 2026 06:56:36 +0000 (0:00:01.390) 1:01:49.745 ****** 2026-02-03 06:56:52.545404 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:56:52.545411 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:56:52.545419 | orchestrator | 2026-02-03 06:56:52.545427 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-03 06:56:52.545435 | orchestrator | Tuesday 03 February 2026 06:56:37 +0000 (0:00:01.318) 1:01:51.063 ****** 2026-02-03 06:56:52.545443 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:56:52.545451 | orchestrator | skipping: [testbed-node-5] 2026-02-03 
06:56:52.545458 | orchestrator | 2026-02-03 06:56:52.545466 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-03 06:56:52.545474 | orchestrator | Tuesday 03 February 2026 06:56:39 +0000 (0:00:01.315) 1:01:52.379 ****** 2026-02-03 06:56:52.545482 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:56:52.545490 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:56:52.545498 | orchestrator | 2026-02-03 06:56:52.545506 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-03 06:56:52.545514 | orchestrator | Tuesday 03 February 2026 06:56:40 +0000 (0:00:01.599) 1:01:53.979 ****** 2026-02-03 06:56:52.545522 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-02-03 06:56:52.545530 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] 2026-02-03 06:56:52.545538 | orchestrator | 2026-02-03 06:56:52.545546 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-03 06:56:52.545554 | orchestrator | Tuesday 03 February 2026 06:56:45 +0000 (0:00:04.917) 1:01:58.896 ****** 2026-02-03 06:56:52.545562 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-03 06:56:52.545570 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-03 06:56:52.545578 | orchestrator | 2026-02-03 06:56:52.545586 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-03 06:56:52.545594 | orchestrator | Tuesday 03 February 2026 06:56:47 +0000 (0:00:01.503) 1:02:00.399 ****** 2026-02-03 06:56:52.545604 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': 
'/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-02-03 06:56:52.545634 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-02-03 06:57:44.567338 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-02-03 06:57:44.567461 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-02-03 06:57:44.567479 | orchestrator | 2026-02-03 06:57:44.567493 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-03 06:57:44.567506 | orchestrator | Tuesday 03 February 2026 06:56:52 +0000 (0:00:05.317) 1:02:05.717 ****** 2026-02-03 06:57:44.567517 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:57:44.567529 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:57:44.567540 | orchestrator | 2026-02-03 06:57:44.567552 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-03 06:57:44.567563 | orchestrator | Tuesday 03 February 2026 06:56:53 
+0000 (0:00:01.336) 1:02:07.054 ****** 2026-02-03 06:57:44.567574 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:57:44.567585 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:57:44.567596 | orchestrator | 2026-02-03 06:57:44.567607 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-03 06:57:44.567619 | orchestrator | Tuesday 03 February 2026 06:56:55 +0000 (0:00:01.451) 1:02:08.505 ****** 2026-02-03 06:57:44.567630 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:57:44.567641 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:57:44.567652 | orchestrator | 2026-02-03 06:57:44.567663 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-03 06:57:44.567673 | orchestrator | Tuesday 03 February 2026 06:56:56 +0000 (0:00:01.429) 1:02:09.935 ****** 2026-02-03 06:57:44.567684 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:57:44.567695 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:57:44.567706 | orchestrator | 2026-02-03 06:57:44.567717 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-03 06:57:44.567728 | orchestrator | Tuesday 03 February 2026 06:56:58 +0000 (0:00:01.289) 1:02:11.225 ****** 2026-02-03 06:57:44.567739 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:57:44.567821 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:57:44.567838 | orchestrator | 2026-02-03 06:57:44.567851 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-03 06:57:44.567909 | orchestrator | Tuesday 03 February 2026 06:56:59 +0000 (0:00:01.405) 1:02:12.630 ****** 2026-02-03 06:57:44.567922 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:57:44.567937 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:57:44.567950 | orchestrator | 2026-02-03 
06:57:44.567963 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-03 06:57:44.567976 | orchestrator | Tuesday 03 February 2026 06:57:01 +0000 (0:00:01.928) 1:02:14.558 ****** 2026-02-03 06:57:44.567989 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-03 06:57:44.568002 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-03 06:57:44.568014 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-03 06:57:44.568026 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:57:44.568039 | orchestrator | 2026-02-03 06:57:44.568078 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-03 06:57:44.568091 | orchestrator | Tuesday 03 February 2026 06:57:02 +0000 (0:00:01.579) 1:02:16.138 ****** 2026-02-03 06:57:44.568104 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-03 06:57:44.568117 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-03 06:57:44.568129 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-03 06:57:44.568143 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:57:44.568156 | orchestrator | 2026-02-03 06:57:44.568169 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-03 06:57:44.568181 | orchestrator | Tuesday 03 February 2026 06:57:04 +0000 (0:00:01.497) 1:02:17.635 ****** 2026-02-03 06:57:44.568191 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-03 06:57:44.568202 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-03 06:57:44.568213 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-03 06:57:44.568224 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:57:44.568235 | orchestrator | 2026-02-03 06:57:44.568245 | orchestrator | TASK [ceph-facts : 
Reset rgw_instances (workaround)] *************************** 2026-02-03 06:57:44.568256 | orchestrator | Tuesday 03 February 2026 06:57:05 +0000 (0:00:01.518) 1:02:19.153 ****** 2026-02-03 06:57:44.568267 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:57:44.568278 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:57:44.568289 | orchestrator | 2026-02-03 06:57:44.568300 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-03 06:57:44.568310 | orchestrator | Tuesday 03 February 2026 06:57:07 +0000 (0:00:01.355) 1:02:20.509 ****** 2026-02-03 06:57:44.568321 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-03 06:57:44.568332 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-03 06:57:44.568343 | orchestrator | 2026-02-03 06:57:44.568353 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-03 06:57:44.568364 | orchestrator | Tuesday 03 February 2026 06:57:08 +0000 (0:00:01.634) 1:02:22.143 ****** 2026-02-03 06:57:44.568375 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:57:44.568386 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:57:44.568397 | orchestrator | 2026-02-03 06:57:44.568426 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-02-03 06:57:44.568438 | orchestrator | Tuesday 03 February 2026 06:57:11 +0000 (0:00:02.192) 1:02:24.336 ****** 2026-02-03 06:57:44.568457 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:57:44.568475 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:57:44.568492 | orchestrator | 2026-02-03 06:57:44.568510 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-02-03 06:57:44.568527 | orchestrator | Tuesday 03 February 2026 06:57:12 +0000 (0:00:01.355) 1:02:25.692 ****** 2026-02-03 06:57:44.568544 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-4, 
testbed-node-5 2026-02-03 06:57:44.568563 | orchestrator | 2026-02-03 06:57:44.568581 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-02-03 06:57:44.568607 | orchestrator | Tuesday 03 February 2026 06:57:13 +0000 (0:00:01.305) 1:02:26.998 ****** 2026-02-03 06:57:44.568627 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-03 06:57:44.568646 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-03 06:57:44.568665 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-02-03 06:57:44.568681 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-02-03 06:57:44.568692 | orchestrator | 2026-02-03 06:57:44.568703 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-02-03 06:57:44.568714 | orchestrator | Tuesday 03 February 2026 06:57:16 +0000 (0:00:02.259) 1:02:29.258 ****** 2026-02-03 06:57:44.568724 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 06:57:44.568747 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-03 06:57:44.568789 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-03 06:57:44.568869 | orchestrator | 2026-02-03 06:57:44.568881 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-02-03 06:57:44.568892 | orchestrator | Tuesday 03 February 2026 06:57:19 +0000 (0:00:03.306) 1:02:32.564 ****** 2026-02-03 06:57:44.568902 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-02-03 06:57:44.568913 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-03 06:57:44.568925 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:57:44.568936 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-02-03 06:57:44.568947 | orchestrator | skipping: [testbed-node-5] => 
(item=None)  2026-02-03 06:57:44.568957 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:57:44.568968 | orchestrator | 2026-02-03 06:57:44.568979 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-03 06:57:44.568990 | orchestrator | Tuesday 03 February 2026 06:57:21 +0000 (0:00:02.294) 1:02:34.859 ****** 2026-02-03 06:57:44.569001 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:57:44.569012 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:57:44.569022 | orchestrator | 2026-02-03 06:57:44.569033 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-02-03 06:57:44.569044 | orchestrator | Tuesday 03 February 2026 06:57:23 +0000 (0:00:02.172) 1:02:37.032 ****** 2026-02-03 06:57:44.569055 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:57:44.569066 | orchestrator | skipping: [testbed-node-5] 2026-02-03 06:57:44.569077 | orchestrator | 2026-02-03 06:57:44.569088 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-03 06:57:44.569099 | orchestrator | Tuesday 03 February 2026 06:57:25 +0000 (0:00:01.393) 1:02:38.425 ****** 2026-02-03 06:57:44.569110 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-4, testbed-node-5 2026-02-03 06:57:44.569121 | orchestrator | 2026-02-03 06:57:44.569132 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-03 06:57:44.569143 | orchestrator | Tuesday 03 February 2026 06:57:26 +0000 (0:00:01.350) 1:02:39.776 ****** 2026-02-03 06:57:44.569154 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-4, testbed-node-5 2026-02-03 06:57:44.569165 | orchestrator | 2026-02-03 06:57:44.569176 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-03 06:57:44.569186 | orchestrator | Tuesday 03 February 2026 
06:57:27 +0000 (0:00:01.306) 1:02:41.083 ****** 2026-02-03 06:57:44.569197 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:57:44.569208 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:57:44.569219 | orchestrator | 2026-02-03 06:57:44.569230 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-03 06:57:44.569241 | orchestrator | Tuesday 03 February 2026 06:57:30 +0000 (0:00:02.441) 1:02:43.525 ****** 2026-02-03 06:57:44.569252 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:57:44.569263 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:57:44.569274 | orchestrator | 2026-02-03 06:57:44.569285 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-03 06:57:44.569296 | orchestrator | Tuesday 03 February 2026 06:57:32 +0000 (0:00:02.131) 1:02:45.656 ****** 2026-02-03 06:57:44.569307 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:57:44.569317 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:57:44.569328 | orchestrator | 2026-02-03 06:57:44.569339 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-03 06:57:44.569350 | orchestrator | Tuesday 03 February 2026 06:57:34 +0000 (0:00:02.462) 1:02:48.118 ****** 2026-02-03 06:57:44.569361 | orchestrator | changed: [testbed-node-5] 2026-02-03 06:57:44.569371 | orchestrator | changed: [testbed-node-4] 2026-02-03 06:57:44.569382 | orchestrator | 2026-02-03 06:57:44.569393 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-03 06:57:44.569404 | orchestrator | Tuesday 03 February 2026 06:57:38 +0000 (0:00:03.656) 1:02:51.775 ****** 2026-02-03 06:57:44.569423 | orchestrator | ok: [testbed-node-4] 2026-02-03 06:57:44.569434 | orchestrator | ok: [testbed-node-5] 2026-02-03 06:57:44.569445 | orchestrator | 2026-02-03 06:57:44.569456 | orchestrator | TASK [Set max_mds] 
************************************************************* 2026-02-03 06:57:44.569467 | orchestrator | Tuesday 03 February 2026 06:57:40 +0000 (0:00:01.865) 1:02:53.640 ****** 2026-02-03 06:57:44.569478 | orchestrator | skipping: [testbed-node-4] 2026-02-03 06:57:44.569500 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-03 06:58:09.546248 | orchestrator | 2026-02-03 06:58:09.546361 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-02-03 06:58:09.546379 | orchestrator | 2026-02-03 06:58:09.546392 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-03 06:58:09.546404 | orchestrator | Tuesday 03 February 2026 06:57:44 +0000 (0:00:04.100) 1:02:57.741 ****** 2026-02-03 06:58:09.546415 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-02-03 06:58:09.546427 | orchestrator | 2026-02-03 06:58:09.546439 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-03 06:58:09.546450 | orchestrator | Tuesday 03 February 2026 06:57:45 +0000 (0:00:01.193) 1:02:58.934 ****** 2026-02-03 06:58:09.546480 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:58:09.546492 | orchestrator | 2026-02-03 06:58:09.546504 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-03 06:58:09.546515 | orchestrator | Tuesday 03 February 2026 06:57:47 +0000 (0:00:01.514) 1:03:00.449 ****** 2026-02-03 06:58:09.546526 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:58:09.546537 | orchestrator | 2026-02-03 06:58:09.546548 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-03 06:58:09.546559 | orchestrator | Tuesday 03 February 2026 06:57:48 +0000 (0:00:01.179) 1:03:01.628 ****** 2026-02-03 06:58:09.546570 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:58:09.546581 | 
orchestrator | 2026-02-03 06:58:09.546592 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-03 06:58:09.546604 | orchestrator | Tuesday 03 February 2026 06:57:50 +0000 (0:00:01.589) 1:03:03.217 ****** 2026-02-03 06:58:09.546615 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:58:09.546626 | orchestrator | 2026-02-03 06:58:09.546637 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-03 06:58:09.546648 | orchestrator | Tuesday 03 February 2026 06:57:51 +0000 (0:00:01.208) 1:03:04.425 ****** 2026-02-03 06:58:09.546659 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:58:09.546670 | orchestrator | 2026-02-03 06:58:09.546682 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-03 06:58:09.546693 | orchestrator | Tuesday 03 February 2026 06:57:52 +0000 (0:00:01.248) 1:03:05.674 ****** 2026-02-03 06:58:09.546704 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:58:09.546715 | orchestrator | 2026-02-03 06:58:09.546726 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-03 06:58:09.546737 | orchestrator | Tuesday 03 February 2026 06:57:53 +0000 (0:00:01.260) 1:03:06.934 ****** 2026-02-03 06:58:09.546776 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:58:09.546790 | orchestrator | 2026-02-03 06:58:09.546803 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-03 06:58:09.546816 | orchestrator | Tuesday 03 February 2026 06:57:54 +0000 (0:00:01.228) 1:03:08.162 ****** 2026-02-03 06:58:09.546829 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:58:09.546842 | orchestrator | 2026-02-03 06:58:09.546855 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-03 06:58:09.546869 | orchestrator | Tuesday 03 February 2026 06:57:56 +0000 
(0:00:01.383) 1:03:09.546 ****** 2026-02-03 06:58:09.546882 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:58:09.546895 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:58:09.546908 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:58:09.546945 | orchestrator | 2026-02-03 06:58:09.546957 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-03 06:58:09.546968 | orchestrator | Tuesday 03 February 2026 06:57:58 +0000 (0:00:02.173) 1:03:11.719 ****** 2026-02-03 06:58:09.546979 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:58:09.546990 | orchestrator | 2026-02-03 06:58:09.547001 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-03 06:58:09.547011 | orchestrator | Tuesday 03 February 2026 06:58:00 +0000 (0:00:01.498) 1:03:13.218 ****** 2026-02-03 06:58:09.547022 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:58:09.547033 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:58:09.547044 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:58:09.547055 | orchestrator | 2026-02-03 06:58:09.547066 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-03 06:58:09.547076 | orchestrator | Tuesday 03 February 2026 06:58:03 +0000 (0:00:03.679) 1:03:16.898 ****** 2026-02-03 06:58:09.547087 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-03 06:58:09.547100 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-03 06:58:09.547111 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-03 
06:58:09.547122 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:58:09.547133 | orchestrator | 2026-02-03 06:58:09.547144 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-03 06:58:09.547155 | orchestrator | Tuesday 03 February 2026 06:58:05 +0000 (0:00:01.667) 1:03:18.565 ****** 2026-02-03 06:58:09.547168 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-03 06:58:09.547181 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-03 06:58:09.547210 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-03 06:58:09.547223 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:58:09.547234 | orchestrator | 2026-02-03 06:58:09.547245 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-03 06:58:09.547256 | orchestrator | Tuesday 03 February 2026 06:58:07 +0000 (0:00:01.750) 1:03:20.316 ****** 2026-02-03 06:58:09.547275 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 
06:58:09.547289 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 06:58:09.547300 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 06:58:09.547321 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:58:09.547332 | orchestrator | 2026-02-03 06:58:09.547348 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-03 06:58:09.547366 | orchestrator | Tuesday 03 February 2026 06:58:08 +0000 (0:00:01.179) 1:03:21.496 ****** 2026-02-03 06:58:09.547389 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'fc9af7e241e8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-03 06:58:01.061350', 'end': '2026-02-03 06:58:01.101875', 'delta': '0:00:00.040525', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fc9af7e241e8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-03 06:58:09.547412 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a8f198eef309', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-03 06:58:01.712103', 'end': '2026-02-03 06:58:01.767313', 'delta': '0:00:00.055210', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a8f198eef309'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-03 06:58:09.547433 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '79d18794d8bb', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-03 06:58:02.370494', 'end': '2026-02-03 06:58:02.417642', 'delta': '0:00:00.047148', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['79d18794d8bb'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-03 06:58:09.547447 | orchestrator | 2026-02-03 06:58:09.547466 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-03 06:58:28.483301 | orchestrator | Tuesday 03 February 2026 06:58:09 +0000 (0:00:01.223) 1:03:22.719 ****** 2026-02-03 06:58:28.483411 | orchestrator | ok: [testbed-node-3] 2026-02-03 
06:58:28.483427 | orchestrator | 2026-02-03 06:58:28.483439 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-03 06:58:28.483450 | orchestrator | Tuesday 03 February 2026 06:58:10 +0000 (0:00:01.333) 1:03:24.052 ****** 2026-02-03 06:58:28.483460 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:58:28.483470 | orchestrator | 2026-02-03 06:58:28.483481 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-03 06:58:28.483508 | orchestrator | Tuesday 03 February 2026 06:58:12 +0000 (0:00:01.339) 1:03:25.392 ****** 2026-02-03 06:58:28.483518 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:58:28.483528 | orchestrator | 2026-02-03 06:58:28.483538 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-03 06:58:28.483548 | orchestrator | Tuesday 03 February 2026 06:58:13 +0000 (0:00:01.178) 1:03:26.570 ****** 2026-02-03 06:58:28.483576 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-03 06:58:28.483587 | orchestrator | 2026-02-03 06:58:28.483597 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-03 06:58:28.483607 | orchestrator | Tuesday 03 February 2026 06:58:15 +0000 (0:00:02.182) 1:03:28.753 ****** 2026-02-03 06:58:28.483616 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:58:28.483626 | orchestrator | 2026-02-03 06:58:28.483636 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-03 06:58:28.483658 | orchestrator | Tuesday 03 February 2026 06:58:16 +0000 (0:00:01.248) 1:03:30.001 ****** 2026-02-03 06:58:28.483668 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:58:28.483677 | orchestrator | 2026-02-03 06:58:28.483687 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-03 06:58:28.483697 | orchestrator 
| Tuesday 03 February 2026 06:58:17 +0000 (0:00:01.162) 1:03:31.164 ****** 2026-02-03 06:58:28.483706 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:58:28.483716 | orchestrator | 2026-02-03 06:58:28.483725 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-03 06:58:28.483735 | orchestrator | Tuesday 03 February 2026 06:58:19 +0000 (0:00:01.302) 1:03:32.466 ****** 2026-02-03 06:58:28.483762 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:58:28.483772 | orchestrator | 2026-02-03 06:58:28.483782 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-03 06:58:28.483792 | orchestrator | Tuesday 03 February 2026 06:58:20 +0000 (0:00:01.277) 1:03:33.744 ****** 2026-02-03 06:58:28.483801 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:58:28.483811 | orchestrator | 2026-02-03 06:58:28.483820 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-03 06:58:28.483830 | orchestrator | Tuesday 03 February 2026 06:58:21 +0000 (0:00:01.148) 1:03:34.892 ****** 2026-02-03 06:58:28.483842 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:58:28.483853 | orchestrator | 2026-02-03 06:58:28.483865 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-03 06:58:28.483875 | orchestrator | Tuesday 03 February 2026 06:58:23 +0000 (0:00:01.511) 1:03:36.404 ****** 2026-02-03 06:58:28.483886 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:58:28.483898 | orchestrator | 2026-02-03 06:58:28.483908 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-03 06:58:28.483920 | orchestrator | Tuesday 03 February 2026 06:58:24 +0000 (0:00:01.236) 1:03:37.641 ****** 2026-02-03 06:58:28.483930 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:58:28.483941 | orchestrator | 2026-02-03 06:58:28.483953 | 
orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-03 06:58:28.483964 | orchestrator | Tuesday 03 February 2026 06:58:25 +0000 (0:00:01.297) 1:03:38.939 ****** 2026-02-03 06:58:28.483975 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:58:28.483986 | orchestrator | 2026-02-03 06:58:28.483997 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-03 06:58:28.484009 | orchestrator | Tuesday 03 February 2026 06:58:26 +0000 (0:00:01.207) 1:03:40.147 ****** 2026-02-03 06:58:28.484020 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:58:28.484032 | orchestrator | 2026-02-03 06:58:28.484043 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-03 06:58:28.484054 | orchestrator | Tuesday 03 February 2026 06:58:28 +0000 (0:00:01.249) 1:03:41.397 ****** 2026-02-03 06:58:28.484068 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:58:28.484085 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--bafb60f3--a5a9--526b--adce--8ea58a9a19cd-osd--block--bafb60f3--a5a9--526b--adce--8ea58a9a19cd', 'dm-uuid-LVM-stKE3AAHbU7tUFxIQAJ72dtWy4EVot1jnVMQamLoChpHBSYL0cLNGgZFRZ56lw3T'], 'uuids': ['027247ae-00a3-443e-9633-8d8391a7da1a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '8097be92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['nVMQam-LoCh-pHBS-YL0c-LNGg-ZFRZ-56lw3T']}})  2026-02-03 06:58:28.484131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_30942d1f-f704-43cc-bdd1-e5a5821a35c3', 'scsi-SQEMU_QEMU_HARDDISK_30942d1f-f704-43cc-bdd1-e5a5821a35c3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '30942d1f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-03 06:58:28.484146 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Xh8ZTx-AObI-x7Qe-6Flc-GeSw-194p-Pfmv8i', 'scsi-0QEMU_QEMU_HARDDISK_b4cf4752-8315-482c-8a5b-0aee9859091f', 'scsi-SQEMU_QEMU_HARDDISK_b4cf4752-8315-482c-8a5b-0aee9859091f'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b4cf4752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--85b6ff9c--bd3f--596f--9d81--0006b9d69e29-osd--block--85b6ff9c--bd3f--596f--9d81--0006b9d69e29']}})  2026-02-03 06:58:28.484159 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:58:28.484173 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:58:28.484186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-03 06:58:28.484198 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:58:28.484209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Vax7AP-xd4A-5hvn-IJK2-L8WY-uJjg-ErTdLp', 'dm-uuid-CRYPT-LUKS2-51cdba44ba2f44e4a9ba680ba42622f2-Vax7AP-xd4A-5hvn-IJK2-L8WY-uJjg-ErTdLp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-03 06:58:28.484236 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:58:29.983648 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--85b6ff9c--bd3f--596f--9d81--0006b9d69e29-osd--block--85b6ff9c--bd3f--596f--9d81--0006b9d69e29', 'dm-uuid-LVM-eCnBPCzOsBAMg7ZG1zzxsebDLR9lBnAnVax7APxd4A5hvnIJK2L8WYuJjgErTdLp'], 'uuids': ['51cdba44-ba2f-44e4-a9ba-680ba42622f2'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b4cf4752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Vax7AP-xd4A-5hvn-IJK2-L8WY-uJjg-ErTdLp']}})  2026-02-03 06:58:29.983829 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-MNylkH-UFIw-FcM9-RNy8-22Oh-QCDT-pfyDSJ', 'scsi-0QEMU_QEMU_HARDDISK_8097be92-44ca-4be8-a1da-39ba5887696e', 'scsi-SQEMU_QEMU_HARDDISK_8097be92-44ca-4be8-a1da-39ba5887696e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8097be92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--bafb60f3--a5a9--526b--adce--8ea58a9a19cd-osd--block--bafb60f3--a5a9--526b--adce--8ea58a9a19cd']}})  2026-02-03 06:58:29.983850 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:58:29.983870 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '26fa6d1d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part16', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part14', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part15', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part1', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-03 06:58:29.983936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:58:29.983950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 06:58:29.983962 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-nVMQam-LoCh-pHBS-YL0c-LNGg-ZFRZ-56lw3T', 'dm-uuid-CRYPT-LUKS2-027247ae00a3443e96338d8391a7da1a-nVMQam-LoCh-pHBS-YL0c-LNGg-ZFRZ-56lw3T'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-03 06:58:29.983976 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:58:29.983989 | orchestrator | 2026-02-03 06:58:29.984001 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-03 06:58:29.984013 | orchestrator | Tuesday 03 February 2026 06:58:29 +0000 (0:00:01.489) 1:03:42.886 ****** 2026-02-03 06:58:29.984025 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:58:29.984038 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--bafb60f3--a5a9--526b--adce--8ea58a9a19cd-osd--block--bafb60f3--a5a9--526b--adce--8ea58a9a19cd', 'dm-uuid-LVM-stKE3AAHbU7tUFxIQAJ72dtWy4EVot1jnVMQamLoChpHBSYL0cLNGgZFRZ56lw3T'], 'uuids': ['027247ae-00a3-443e-9633-8d8391a7da1a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '8097be92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['nVMQam-LoCh-pHBS-YL0c-LNGg-ZFRZ-56lw3T']}}, 'ansible_loop_var': 'item'})  2026-02-03 06:58:29.984057 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_30942d1f-f704-43cc-bdd1-e5a5821a35c3', 'scsi-SQEMU_QEMU_HARDDISK_30942d1f-f704-43cc-bdd1-e5a5821a35c3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '30942d1f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:58:29.984083 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Xh8ZTx-AObI-x7Qe-6Flc-GeSw-194p-Pfmv8i', 'scsi-0QEMU_QEMU_HARDDISK_b4cf4752-8315-482c-8a5b-0aee9859091f', 'scsi-SQEMU_QEMU_HARDDISK_b4cf4752-8315-482c-8a5b-0aee9859091f'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b4cf4752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--85b6ff9c--bd3f--596f--9d81--0006b9d69e29-osd--block--85b6ff9c--bd3f--596f--9d81--0006b9d69e29']}}, 'ansible_loop_var': 'item'})  2026-02-03 06:58:31.231448 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:58:31.231549 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:58:31.231565 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:58:31.231598 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:58:31.231609 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Vax7AP-xd4A-5hvn-IJK2-L8WY-uJjg-ErTdLp', 'dm-uuid-CRYPT-LUKS2-51cdba44ba2f44e4a9ba680ba42622f2-Vax7AP-xd4A-5hvn-IJK2-L8WY-uJjg-ErTdLp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:58:31.231635 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:58:31.231665 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--85b6ff9c--bd3f--596f--9d81--0006b9d69e29-osd--block--85b6ff9c--bd3f--596f--9d81--0006b9d69e29', 'dm-uuid-LVM-eCnBPCzOsBAMg7ZG1zzxsebDLR9lBnAnVax7APxd4A5hvnIJK2L8WYuJjgErTdLp'], 'uuids': ['51cdba44-ba2f-44e4-a9ba-680ba42622f2'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b4cf4752', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Vax7AP-xd4A-5hvn-IJK2-L8WY-uJjg-ErTdLp']}}, 'ansible_loop_var': 'item'})  2026-02-03 06:58:31.231678 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-MNylkH-UFIw-FcM9-RNy8-22Oh-QCDT-pfyDSJ', 'scsi-0QEMU_QEMU_HARDDISK_8097be92-44ca-4be8-a1da-39ba5887696e', 'scsi-SQEMU_QEMU_HARDDISK_8097be92-44ca-4be8-a1da-39ba5887696e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8097be92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--bafb60f3--a5a9--526b--adce--8ea58a9a19cd-osd--block--bafb60f3--a5a9--526b--adce--8ea58a9a19cd']}}, 'ansible_loop_var': 'item'})  2026-02-03 06:58:31.231698 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:58:31.231725 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '26fa6d1d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part16', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part14', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part15', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part1', 'scsi-SQEMU_QEMU_HARDDISK_26fa6d1d-7884-464f-aecb-162ac10d2371-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:59:02.973541 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:59:02.973656 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:59:02.973697 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-nVMQam-LoCh-pHBS-YL0c-LNGg-ZFRZ-56lw3T', 'dm-uuid-CRYPT-LUKS2-027247ae00a3443e96338d8391a7da1a-nVMQam-LoCh-pHBS-YL0c-LNGg-ZFRZ-56lw3T'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 06:59:02.973712 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:59:02.973725 | orchestrator | 2026-02-03 06:59:02.973737 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-03 06:59:02.973818 | orchestrator | Tuesday 03 February 2026 06:58:31 +0000 (0:00:01.523) 1:03:44.410 ****** 2026-02-03 06:59:02.973829 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:59:02.973842 | orchestrator | 2026-02-03 06:59:02.973853 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-03 06:59:02.973864 | orchestrator | Tuesday 03 February 2026 06:58:32 +0000 (0:00:01.567) 1:03:45.978 ****** 2026-02-03 06:59:02.973875 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:59:02.973886 | orchestrator | 2026-02-03 06:59:02.973897 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-03 06:59:02.973907 | orchestrator | Tuesday 03 February 2026 06:58:34 +0000 (0:00:01.268) 1:03:47.246 ****** 2026-02-03 06:59:02.973918 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:59:02.973929 | orchestrator | 2026-02-03 06:59:02.973940 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-03 06:59:02.973951 | orchestrator | Tuesday 03 February 2026 06:58:35 +0000 (0:00:01.547) 1:03:48.794 ****** 2026-02-03 06:59:02.973962 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:59:02.973973 | orchestrator | 2026-02-03 06:59:02.973984 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-03 06:59:02.974010 | orchestrator | Tuesday 03 February 2026 06:58:36 +0000 (0:00:01.192) 1:03:49.986 ****** 2026-02-03 06:59:02.974079 | orchestrator | skipping: [testbed-node-3] 2026-02-03 
06:59:02.974092 | orchestrator | 2026-02-03 06:59:02.974106 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-03 06:59:02.974118 | orchestrator | Tuesday 03 February 2026 06:58:38 +0000 (0:00:01.312) 1:03:51.298 ****** 2026-02-03 06:59:02.974140 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:59:02.974153 | orchestrator | 2026-02-03 06:59:02.974166 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-03 06:59:02.974179 | orchestrator | Tuesday 03 February 2026 06:58:39 +0000 (0:00:01.221) 1:03:52.520 ****** 2026-02-03 06:59:02.974190 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-03 06:59:02.974202 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-03 06:59:02.974213 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-03 06:59:02.974223 | orchestrator | 2026-02-03 06:59:02.974235 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-03 06:59:02.974246 | orchestrator | Tuesday 03 February 2026 06:58:41 +0000 (0:00:02.378) 1:03:54.899 ****** 2026-02-03 06:59:02.974257 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-03 06:59:02.974269 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-03 06:59:02.974457 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-03 06:59:02.974486 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:59:02.974497 | orchestrator | 2026-02-03 06:59:02.974508 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-03 06:59:02.974520 | orchestrator | Tuesday 03 February 2026 06:58:42 +0000 (0:00:01.247) 1:03:56.147 ****** 2026-02-03 06:59:02.974548 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-02-03 06:59:02.974560 | 
orchestrator | 2026-02-03 06:59:02.974572 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-03 06:59:02.974585 | orchestrator | Tuesday 03 February 2026 06:58:44 +0000 (0:00:01.208) 1:03:57.355 ****** 2026-02-03 06:59:02.974596 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:59:02.974607 | orchestrator | 2026-02-03 06:59:02.974618 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-03 06:59:02.974630 | orchestrator | Tuesday 03 February 2026 06:58:45 +0000 (0:00:01.224) 1:03:58.580 ****** 2026-02-03 06:59:02.974641 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:59:02.974652 | orchestrator | 2026-02-03 06:59:02.974664 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-03 06:59:02.974675 | orchestrator | Tuesday 03 February 2026 06:58:46 +0000 (0:00:01.207) 1:03:59.787 ****** 2026-02-03 06:59:02.974686 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:59:02.974697 | orchestrator | 2026-02-03 06:59:02.974708 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-03 06:59:02.974719 | orchestrator | Tuesday 03 February 2026 06:58:47 +0000 (0:00:01.227) 1:04:01.015 ****** 2026-02-03 06:59:02.974730 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:59:02.974765 | orchestrator | 2026-02-03 06:59:02.974777 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-03 06:59:02.974788 | orchestrator | Tuesday 03 February 2026 06:58:49 +0000 (0:00:01.304) 1:04:02.320 ****** 2026-02-03 06:59:02.974799 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-03 06:59:02.974810 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-03 06:59:02.974821 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-02-03 06:59:02.974832 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:59:02.974843 | orchestrator | 2026-02-03 06:59:02.974854 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-03 06:59:02.974864 | orchestrator | Tuesday 03 February 2026 06:58:50 +0000 (0:00:01.532) 1:04:03.852 ****** 2026-02-03 06:59:02.974876 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-03 06:59:02.974886 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-03 06:59:02.974897 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-03 06:59:02.974908 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:59:02.974919 | orchestrator | 2026-02-03 06:59:02.974930 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-03 06:59:02.974941 | orchestrator | Tuesday 03 February 2026 06:58:52 +0000 (0:00:01.530) 1:04:05.383 ****** 2026-02-03 06:59:02.974952 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-03 06:59:02.974963 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-03 06:59:02.974974 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-03 06:59:02.974985 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:59:02.974996 | orchestrator | 2026-02-03 06:59:02.975007 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-03 06:59:02.975018 | orchestrator | Tuesday 03 February 2026 06:58:53 +0000 (0:00:01.546) 1:04:06.930 ****** 2026-02-03 06:59:02.975029 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:59:02.975040 | orchestrator | 2026-02-03 06:59:02.975051 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-03 06:59:02.975062 | orchestrator | Tuesday 03 February 2026 06:58:55 +0000 
(0:00:01.267) 1:04:08.197 ****** 2026-02-03 06:59:02.975114 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-03 06:59:02.975127 | orchestrator | 2026-02-03 06:59:02.975138 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-03 06:59:02.975149 | orchestrator | Tuesday 03 February 2026 06:58:56 +0000 (0:00:01.963) 1:04:10.160 ****** 2026-02-03 06:59:02.975160 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:59:02.975171 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:59:02.975189 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:59:02.975201 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-03 06:59:02.975212 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-03 06:59:02.975223 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-03 06:59:02.975234 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-03 06:59:02.975245 | orchestrator | 2026-02-03 06:59:02.975256 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-03 06:59:02.975266 | orchestrator | Tuesday 03 February 2026 06:58:59 +0000 (0:00:02.699) 1:04:12.860 ****** 2026-02-03 06:59:02.975277 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 06:59:02.975288 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 06:59:02.975299 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 06:59:02.975310 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-03 06:59:02.975321 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-03 06:59:02.975332 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-03 06:59:02.975343 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-03 06:59:02.975354 | orchestrator | 2026-02-03 06:59:02.975373 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-02-03 06:59:59.887731 | orchestrator | Tuesday 03 February 2026 06:59:02 +0000 (0:00:03.275) 1:04:16.136 ****** 2026-02-03 06:59:59.887882 | orchestrator | changed: [testbed-node-3] 2026-02-03 06:59:59.887899 | orchestrator | 2026-02-03 06:59:59.887911 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-02-03 06:59:59.887923 | orchestrator | Tuesday 03 February 2026 06:59:05 +0000 (0:00:02.488) 1:04:18.625 ****** 2026-02-03 06:59:59.887935 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-03 06:59:59.887948 | orchestrator | 2026-02-03 06:59:59.887959 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-02-03 06:59:59.887970 | orchestrator | Tuesday 03 February 2026 06:59:09 +0000 (0:00:03.972) 1:04:22.597 ****** 2026-02-03 06:59:59.887981 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-03 06:59:59.887993 | orchestrator | 2026-02-03 06:59:59.888004 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-03 06:59:59.888015 | orchestrator | Tuesday 03 February 2026 06:59:11 +0000 (0:00:02.368) 1:04:24.966 ****** 2026-02-03 06:59:59.888026 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3 2026-02-03 06:59:59.888037 | orchestrator | 2026-02-03 06:59:59.888048 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-03 06:59:59.888059 | orchestrator | Tuesday 03 February 2026 06:59:12 +0000 (0:00:01.152) 1:04:26.118 ****** 2026-02-03 06:59:59.888071 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3 2026-02-03 06:59:59.888105 | orchestrator | 2026-02-03 06:59:59.888117 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-03 06:59:59.888128 | orchestrator | Tuesday 03 February 2026 06:59:14 +0000 (0:00:01.199) 1:04:27.318 ****** 2026-02-03 06:59:59.888139 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:59:59.888150 | orchestrator | 2026-02-03 06:59:59.888161 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-03 06:59:59.888172 | orchestrator | Tuesday 03 February 2026 06:59:15 +0000 (0:00:01.194) 1:04:28.512 ****** 2026-02-03 06:59:59.888198 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:59:59.888211 | orchestrator | 2026-02-03 06:59:59.888232 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-02-03 06:59:59.888243 | orchestrator | Tuesday 03 February 2026 06:59:16 +0000 (0:00:01.634) 1:04:30.147 ****** 2026-02-03 06:59:59.888256 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:59:59.888269 | orchestrator | 2026-02-03 06:59:59.888280 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-03 06:59:59.888293 | orchestrator | Tuesday 03 February 2026 06:59:18 +0000 (0:00:01.623) 1:04:31.770 ****** 2026-02-03 06:59:59.888304 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:59:59.888316 | orchestrator | 2026-02-03 06:59:59.888329 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-03 06:59:59.888341 | orchestrator | Tuesday 03 February 2026 06:59:20 +0000 (0:00:01.679) 1:04:33.450 ****** 2026-02-03 06:59:59.888354 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:59:59.888367 | orchestrator | 2026-02-03 06:59:59.888379 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-03 06:59:59.888392 | orchestrator | Tuesday 03 February 2026 06:59:21 +0000 (0:00:01.196) 1:04:34.646 ****** 2026-02-03 06:59:59.888404 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:59:59.888417 | orchestrator | 2026-02-03 06:59:59.888427 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-03 06:59:59.888438 | orchestrator | Tuesday 03 February 2026 06:59:22 +0000 (0:00:01.207) 1:04:35.854 ****** 2026-02-03 06:59:59.888449 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:59:59.888460 | orchestrator | 2026-02-03 06:59:59.888470 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-03 06:59:59.888481 | orchestrator | Tuesday 03 February 2026 06:59:23 +0000 (0:00:01.239) 1:04:37.093 ****** 2026-02-03 06:59:59.888492 | 
orchestrator | ok: [testbed-node-3] 2026-02-03 06:59:59.888502 | orchestrator | 2026-02-03 06:59:59.888536 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-03 06:59:59.888558 | orchestrator | Tuesday 03 February 2026 06:59:25 +0000 (0:00:01.721) 1:04:38.815 ****** 2026-02-03 06:59:59.888575 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:59:59.888586 | orchestrator | 2026-02-03 06:59:59.888598 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-03 06:59:59.888609 | orchestrator | Tuesday 03 February 2026 06:59:27 +0000 (0:00:01.632) 1:04:40.447 ****** 2026-02-03 06:59:59.888620 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:59:59.888630 | orchestrator | 2026-02-03 06:59:59.888641 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-03 06:59:59.888652 | orchestrator | Tuesday 03 February 2026 06:59:28 +0000 (0:00:01.227) 1:04:41.675 ****** 2026-02-03 06:59:59.888663 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:59:59.888693 | orchestrator | 2026-02-03 06:59:59.888705 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-03 06:59:59.888715 | orchestrator | Tuesday 03 February 2026 06:59:29 +0000 (0:00:01.140) 1:04:42.816 ****** 2026-02-03 06:59:59.888726 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:59:59.888761 | orchestrator | 2026-02-03 06:59:59.888774 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-03 06:59:59.888785 | orchestrator | Tuesday 03 February 2026 06:59:30 +0000 (0:00:01.198) 1:04:44.014 ****** 2026-02-03 06:59:59.888807 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:59:59.888818 | orchestrator | 2026-02-03 06:59:59.888829 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-03 06:59:59.888840 
| orchestrator | Tuesday 03 February 2026 06:59:32 +0000 (0:00:01.307) 1:04:45.321 ****** 2026-02-03 06:59:59.888851 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:59:59.888862 | orchestrator | 2026-02-03 06:59:59.888890 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-03 06:59:59.888902 | orchestrator | Tuesday 03 February 2026 06:59:33 +0000 (0:00:01.293) 1:04:46.615 ****** 2026-02-03 06:59:59.888913 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:59:59.888925 | orchestrator | 2026-02-03 06:59:59.888936 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-03 06:59:59.888946 | orchestrator | Tuesday 03 February 2026 06:59:34 +0000 (0:00:01.184) 1:04:47.800 ****** 2026-02-03 06:59:59.888957 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:59:59.888968 | orchestrator | 2026-02-03 06:59:59.888979 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-03 06:59:59.888990 | orchestrator | Tuesday 03 February 2026 06:59:35 +0000 (0:00:01.232) 1:04:49.033 ****** 2026-02-03 06:59:59.889001 | orchestrator | skipping: [testbed-node-3] 2026-02-03 06:59:59.889012 | orchestrator | 2026-02-03 06:59:59.889022 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-03 06:59:59.889033 | orchestrator | Tuesday 03 February 2026 06:59:37 +0000 (0:00:01.325) 1:04:50.358 ****** 2026-02-03 06:59:59.889044 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:59:59.889055 | orchestrator | 2026-02-03 06:59:59.889066 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-03 06:59:59.889076 | orchestrator | Tuesday 03 February 2026 06:59:38 +0000 (0:00:01.190) 1:04:51.549 ****** 2026-02-03 06:59:59.889087 | orchestrator | ok: [testbed-node-3] 2026-02-03 06:59:59.889098 | orchestrator | 2026-02-03 06:59:59.889109 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-03 06:59:59.889120 | orchestrator | Tuesday 03 February 2026 06:59:39 +0000 (0:00:01.189) 1:04:52.738 ******
2026-02-03 06:59:59.889131 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:59:59.889142 | orchestrator |
2026-02-03 06:59:59.889153 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-03 06:59:59.889163 | orchestrator | Tuesday 03 February 2026 06:59:40 +0000 (0:00:01.225) 1:04:53.964 ******
2026-02-03 06:59:59.889174 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:59:59.889185 | orchestrator |
2026-02-03 06:59:59.889196 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-03 06:59:59.889207 | orchestrator | Tuesday 03 February 2026 06:59:42 +0000 (0:00:01.296) 1:04:55.260 ******
2026-02-03 06:59:59.889218 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:59:59.889229 | orchestrator |
2026-02-03 06:59:59.889240 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-03 06:59:59.889250 | orchestrator | Tuesday 03 February 2026 06:59:43 +0000 (0:00:01.199) 1:04:56.460 ******
2026-02-03 06:59:59.889261 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:59:59.889272 | orchestrator |
2026-02-03 06:59:59.889283 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-03 06:59:59.889294 | orchestrator | Tuesday 03 February 2026 06:59:44 +0000 (0:00:01.194) 1:04:57.655 ******
2026-02-03 06:59:59.889305 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:59:59.889315 | orchestrator |
2026-02-03 06:59:59.889326 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-03 06:59:59.889337 | orchestrator | Tuesday 03 February 2026 06:59:45 +0000 (0:00:01.227) 1:04:58.883 ******
2026-02-03 06:59:59.889348 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:59:59.889359 | orchestrator |
2026-02-03 06:59:59.889369 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-03 06:59:59.889380 | orchestrator | Tuesday 03 February 2026 06:59:46 +0000 (0:00:01.165) 1:05:00.048 ******
2026-02-03 06:59:59.889398 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:59:59.889409 | orchestrator |
2026-02-03 06:59:59.889420 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-03 06:59:59.889432 | orchestrator | Tuesday 03 February 2026 06:59:48 +0000 (0:00:01.248) 1:05:01.297 ******
2026-02-03 06:59:59.889443 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:59:59.889454 | orchestrator |
2026-02-03 06:59:59.889465 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-03 06:59:59.889476 | orchestrator | Tuesday 03 February 2026 06:59:49 +0000 (0:00:01.138) 1:05:02.435 ******
2026-02-03 06:59:59.889487 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:59:59.889497 | orchestrator |
2026-02-03 06:59:59.889508 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-03 06:59:59.889519 | orchestrator | Tuesday 03 February 2026 06:59:50 +0000 (0:00:01.345) 1:05:03.781 ******
2026-02-03 06:59:59.889536 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:59:59.889547 | orchestrator |
2026-02-03 06:59:59.889558 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-03 06:59:59.889569 | orchestrator | Tuesday 03 February 2026 06:59:51 +0000 (0:00:01.296) 1:05:05.077 ******
2026-02-03 06:59:59.889579 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:59:59.889590 | orchestrator |
2026-02-03 06:59:59.889601 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-03 06:59:59.889612 | orchestrator | Tuesday 03 February 2026 06:59:53 +0000 (0:00:01.212) 1:05:06.290 ******
2026-02-03 06:59:59.889623 | orchestrator | skipping: [testbed-node-3]
2026-02-03 06:59:59.889634 | orchestrator |
2026-02-03 06:59:59.889645 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-03 06:59:59.889656 | orchestrator | Tuesday 03 February 2026 06:59:54 +0000 (0:00:01.170) 1:05:07.461 ******
2026-02-03 06:59:59.889667 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:59:59.889678 | orchestrator |
2026-02-03 06:59:59.889689 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-03 06:59:59.889700 | orchestrator | Tuesday 03 February 2026 06:59:56 +0000 (0:00:02.052) 1:05:09.513 ******
2026-02-03 06:59:59.889711 | orchestrator | ok: [testbed-node-3]
2026-02-03 06:59:59.889721 | orchestrator |
2026-02-03 06:59:59.889732 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-03 06:59:59.889794 | orchestrator | Tuesday 03 February 2026 06:59:58 +0000 (0:00:02.289) 1:05:11.803 ******
2026-02-03 06:59:59.889806 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3
2026-02-03 06:59:59.889817 | orchestrator |
2026-02-03 06:59:59.889828 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-03 06:59:59.889847 | orchestrator | Tuesday 03 February 2026 06:59:59 +0000 (0:00:01.257) 1:05:13.060 ******
2026-02-03 07:00:49.796322 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:00:49.796435 | orchestrator |
2026-02-03 07:00:49.796452 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-03 07:00:49.796465 | orchestrator | Tuesday 03 February 2026 07:00:01 +0000 (0:00:01.283) 1:05:14.343 ******
2026-02-03 07:00:49.796476 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:00:49.796488 | orchestrator |
2026-02-03 07:00:49.796499 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-03 07:00:49.796510 | orchestrator | Tuesday 03 February 2026 07:00:02 +0000 (0:00:01.253) 1:05:15.597 ******
2026-02-03 07:00:49.796521 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-03 07:00:49.796532 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-03 07:00:49.796543 | orchestrator |
2026-02-03 07:00:49.796559 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-03 07:00:49.796579 | orchestrator | Tuesday 03 February 2026 07:00:04 +0000 (0:00:01.962) 1:05:17.559 ******
2026-02-03 07:00:49.796597 | orchestrator | ok: [testbed-node-3]
2026-02-03 07:00:49.796650 | orchestrator |
2026-02-03 07:00:49.796671 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-03 07:00:49.796691 | orchestrator | Tuesday 03 February 2026 07:00:05 +0000 (0:00:01.625) 1:05:19.185 ******
2026-02-03 07:00:49.796710 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:00:49.796729 | orchestrator |
2026-02-03 07:00:49.796817 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-03 07:00:49.796835 | orchestrator | Tuesday 03 February 2026 07:00:07 +0000 (0:00:01.193) 1:05:20.378 ******
2026-02-03 07:00:49.796852 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:00:49.796869 | orchestrator |
2026-02-03 07:00:49.796887 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-03 07:00:49.796905 | orchestrator | Tuesday 03 February 2026 07:00:08 +0000 (0:00:01.199) 1:05:21.578 ******
2026-02-03 07:00:49.796922 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:00:49.796940 | orchestrator |
2026-02-03 07:00:49.796957 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-03 07:00:49.796975 | orchestrator | Tuesday 03 February 2026 07:00:09 +0000 (0:00:01.304) 1:05:22.882 ******
2026-02-03 07:00:49.796993 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3
2026-02-03 07:00:49.797012 | orchestrator |
2026-02-03 07:00:49.797031 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-03 07:00:49.797049 | orchestrator | Tuesday 03 February 2026 07:00:10 +0000 (0:00:01.225) 1:05:24.108 ******
2026-02-03 07:00:49.797066 | orchestrator | ok: [testbed-node-3]
2026-02-03 07:00:49.797084 | orchestrator |
2026-02-03 07:00:49.797103 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-03 07:00:49.797122 | orchestrator | Tuesday 03 February 2026 07:00:12 +0000 (0:00:01.726) 1:05:25.834 ******
2026-02-03 07:00:49.797140 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-03 07:00:49.797158 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-03 07:00:49.797176 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-03 07:00:49.797193 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:00:49.797211 | orchestrator |
2026-02-03 07:00:49.797229 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-03 07:00:49.797247 | orchestrator | Tuesday 03 February 2026 07:00:13 +0000 (0:00:01.199) 1:05:27.034 ******
2026-02-03 07:00:49.797265 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:00:49.797283 | orchestrator |
2026-02-03 07:00:49.797301 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-03 07:00:49.797319 | orchestrator | Tuesday 03 February 2026 07:00:14 +0000 (0:00:01.141) 1:05:28.176 ******
2026-02-03 07:00:49.797336 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:00:49.797354 | orchestrator |
2026-02-03 07:00:49.797371 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-03 07:00:49.797407 | orchestrator | Tuesday 03 February 2026 07:00:16 +0000 (0:00:01.305) 1:05:29.482 ******
2026-02-03 07:00:49.797424 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:00:49.797441 | orchestrator |
2026-02-03 07:00:49.797458 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-03 07:00:49.797475 | orchestrator | Tuesday 03 February 2026 07:00:17 +0000 (0:00:01.671) 1:05:31.154 ******
2026-02-03 07:00:49.797491 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:00:49.797508 | orchestrator |
2026-02-03 07:00:49.797525 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-03 07:00:49.797542 | orchestrator | Tuesday 03 February 2026 07:00:19 +0000 (0:00:01.213) 1:05:32.367 ******
2026-02-03 07:00:49.797559 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:00:49.797575 | orchestrator |
2026-02-03 07:00:49.797591 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-03 07:00:49.797608 | orchestrator | Tuesday 03 February 2026 07:00:20 +0000 (0:00:01.220) 1:05:33.588 ******
2026-02-03 07:00:49.797645 | orchestrator | ok: [testbed-node-3]
2026-02-03 07:00:49.797662 | orchestrator |
2026-02-03 07:00:49.797679 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-03 07:00:49.797696 | orchestrator | Tuesday 03 February 2026 07:00:22 +0000 (0:00:02.555) 1:05:36.143 ******
2026-02-03 07:00:49.797713 | orchestrator | ok: [testbed-node-3]
2026-02-03 07:00:49.797755 | orchestrator |
2026-02-03 07:00:49.797777 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-03 07:00:49.797795 | orchestrator | Tuesday 03 February 2026 07:00:24 +0000 (0:00:01.264) 1:05:37.408 ******
2026-02-03 07:00:49.797813 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3
2026-02-03 07:00:49.797830 | orchestrator |
2026-02-03 07:00:49.797848 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-03 07:00:49.797896 | orchestrator | Tuesday 03 February 2026 07:00:25 +0000 (0:00:01.369) 1:05:38.778 ******
2026-02-03 07:00:49.797917 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:00:49.797934 | orchestrator |
2026-02-03 07:00:49.797953 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-03 07:00:49.797971 | orchestrator | Tuesday 03 February 2026 07:00:26 +0000 (0:00:01.234) 1:05:40.012 ******
2026-02-03 07:00:49.797989 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:00:49.798009 | orchestrator |
2026-02-03 07:00:49.798157 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-03 07:00:49.798172 | orchestrator | Tuesday 03 February 2026 07:00:28 +0000 (0:00:01.349) 1:05:41.362 ******
2026-02-03 07:00:49.798184 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:00:49.798195 | orchestrator |
2026-02-03 07:00:49.798205 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-03 07:00:49.798217 | orchestrator | Tuesday 03 February 2026 07:00:29 +0000 (0:00:01.284) 1:05:42.646 ******
2026-02-03 07:00:49.798227 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:00:49.798238 | orchestrator |
2026-02-03 07:00:49.798249 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-03 07:00:49.798260 | orchestrator | Tuesday 03 February 2026 07:00:30 +0000 (0:00:01.223) 1:05:43.869 ******
2026-02-03 07:00:49.798271 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:00:49.798281 | orchestrator |
2026-02-03 07:00:49.798292 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-03 07:00:49.798303 | orchestrator | Tuesday 03 February 2026 07:00:31 +0000 (0:00:01.276) 1:05:45.146 ******
2026-02-03 07:00:49.798314 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:00:49.798325 | orchestrator |
2026-02-03 07:00:49.798336 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-03 07:00:49.798347 | orchestrator | Tuesday 03 February 2026 07:00:33 +0000 (0:00:01.190) 1:05:46.337 ******
2026-02-03 07:00:49.798358 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:00:49.798368 | orchestrator |
2026-02-03 07:00:49.798379 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-03 07:00:49.798390 | orchestrator | Tuesday 03 February 2026 07:00:34 +0000 (0:00:01.168) 1:05:47.505 ******
2026-02-03 07:00:49.798401 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:00:49.798412 | orchestrator |
2026-02-03 07:00:49.798422 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-03 07:00:49.798463 | orchestrator | Tuesday 03 February 2026 07:00:35 +0000 (0:00:01.275) 1:05:48.781 ******
2026-02-03 07:00:49.798477 | orchestrator | ok: [testbed-node-3]
2026-02-03 07:00:49.798488 | orchestrator |
2026-02-03 07:00:49.798498 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-03 07:00:49.798509 | orchestrator | Tuesday 03 February 2026 07:00:36 +0000 (0:00:01.219) 1:05:50.001 ******
2026-02-03 07:00:49.798520 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3
2026-02-03 07:00:49.798532 | orchestrator |
2026-02-03 07:00:49.798543 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-03 07:00:49.798566 | orchestrator | Tuesday 03 February 2026 07:00:38 +0000 (0:00:01.228) 1:05:51.229 ******
2026-02-03 07:00:49.798577 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph)
2026-02-03 07:00:49.798588 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/)
2026-02-03 07:00:49.798600 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-02-03 07:00:49.798610 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-02-03 07:00:49.798621 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-02-03 07:00:49.798632 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-02-03 07:00:49.798643 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-02-03 07:00:49.798654 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-02-03 07:00:49.798665 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-03 07:00:49.798676 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-03 07:00:49.798687 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-03 07:00:49.798698 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-03 07:00:49.798718 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-03 07:00:49.798729 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-03 07:00:49.798770 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2026-02-03 07:00:49.798781 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph)
2026-02-03 07:00:49.798792 | orchestrator |
2026-02-03 07:00:49.798803 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-03 07:00:49.798814 | orchestrator | Tuesday 03 February 2026 07:00:44 +0000 (0:00:06.857) 1:05:58.086 ******
2026-02-03 07:00:49.798825 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3
2026-02-03 07:00:49.798836 | orchestrator |
2026-02-03 07:00:49.798846 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-03 07:00:49.798857 | orchestrator | Tuesday 03 February 2026 07:00:46 +0000 (0:00:01.270) 1:05:59.356 ******
2026-02-03 07:00:49.798868 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-03 07:00:49.798881 | orchestrator |
2026-02-03 07:00:49.798892 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-03 07:00:49.798903 | orchestrator | Tuesday 03 February 2026 07:00:47 +0000 (0:00:01.553) 1:06:00.909 ******
2026-02-03 07:00:49.798914 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-03 07:00:49.798925 | orchestrator |
2026-02-03 07:00:49.798936 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-03 07:00:49.798959 | orchestrator | Tuesday 03 February 2026 07:00:49 +0000 (0:00:02.059) 1:06:02.969 ******
2026-02-03 07:01:43.535605 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:01:43.535752 | orchestrator |
2026-02-03 07:01:43.535772 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-03 07:01:43.535788 | orchestrator | Tuesday 03 February 2026 07:00:51 +0000 (0:00:01.293) 1:06:04.262 ******
2026-02-03 07:01:43.535800 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:01:43.535811 | orchestrator |
2026-02-03 07:01:43.535822 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-03 07:01:43.535834 | orchestrator | Tuesday 03 February 2026 07:00:52 +0000 (0:00:01.183) 1:06:05.446 ******
2026-02-03 07:01:43.535845 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:01:43.535856 | orchestrator |
2026-02-03 07:01:43.535867 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-03 07:01:43.535878 | orchestrator | Tuesday 03 February 2026 07:00:53 +0000 (0:00:01.208) 1:06:06.654 ******
2026-02-03 07:01:43.535889 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:01:43.535924 | orchestrator |
2026-02-03 07:01:43.535936 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-03 07:01:43.535947 | orchestrator | Tuesday 03 February 2026 07:00:54 +0000 (0:00:01.245) 1:06:07.900 ******
2026-02-03 07:01:43.535958 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:01:43.535969 | orchestrator |
2026-02-03 07:01:43.535980 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-03 07:01:43.535992 | orchestrator | Tuesday 03 February 2026 07:00:55 +0000 (0:00:01.227) 1:06:09.127 ******
2026-02-03 07:01:43.536003 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:01:43.536013 | orchestrator |
2026-02-03 07:01:43.536024 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-03 07:01:43.536035 | orchestrator | Tuesday 03 February 2026 07:00:57 +0000 (0:00:01.204) 1:06:10.332 ******
2026-02-03 07:01:43.536046 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:01:43.536057 | orchestrator |
2026-02-03 07:01:43.536068 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-03 07:01:43.536079 | orchestrator | Tuesday 03 February 2026 07:00:58 +0000 (0:00:01.173) 1:06:11.506 ******
2026-02-03 07:01:43.536090 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:01:43.536101 | orchestrator |
2026-02-03 07:01:43.536111 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-03 07:01:43.536123 | orchestrator | Tuesday 03 February 2026 07:00:59 +0000 (0:00:01.171) 1:06:12.677 ******
2026-02-03 07:01:43.536136 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:01:43.536149 | orchestrator |
2026-02-03 07:01:43.536163 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-03 07:01:43.536176 | orchestrator | Tuesday 03 February 2026 07:01:00 +0000 (0:00:01.281) 1:06:13.958 ******
2026-02-03 07:01:43.536188 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:01:43.536201 | orchestrator |
2026-02-03 07:01:43.536214 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-03 07:01:43.536226 | orchestrator | Tuesday 03 February 2026 07:01:01 +0000 (0:00:01.208) 1:06:15.167 ******
2026-02-03 07:01:43.536239 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:01:43.536252 | orchestrator |
2026-02-03 07:01:43.536264 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-03 07:01:43.536278 | orchestrator | Tuesday 03 February 2026 07:01:03 +0000 (0:00:01.265) 1:06:16.432 ******
2026-02-03 07:01:43.536290 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-02-03 07:01:43.536303 | orchestrator |
2026-02-03 07:01:43.536314 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-03 07:01:43.536325 | orchestrator | Tuesday 03 February 2026 07:01:07 +0000 (0:00:04.503) 1:06:20.935 ******
2026-02-03 07:01:43.536336 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-03 07:01:43.536348 | orchestrator |
2026-02-03 07:01:43.536359 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-03 07:01:43.536385 | orchestrator | Tuesday 03 February 2026 07:01:09 +0000 (0:00:01.449) 1:06:22.385 ******
2026-02-03 07:01:43.536399 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-02-03 07:01:43.536413 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-02-03 07:01:43.536436 | orchestrator |
2026-02-03 07:01:43.536448 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-03 07:01:43.536459 | orchestrator | Tuesday 03 February 2026 07:01:14 +0000 (0:00:05.160) 1:06:27.545 ******
2026-02-03 07:01:43.536470 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:01:43.536481 | orchestrator |
2026-02-03 07:01:43.536492 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-03 07:01:43.536503 | orchestrator | Tuesday 03 February 2026 07:01:15 +0000 (0:00:01.157) 1:06:28.703 ******
2026-02-03 07:01:43.536513 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:01:43.536524 | orchestrator |
2026-02-03 07:01:43.536535 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-03 07:01:43.536563 | orchestrator | Tuesday 03 February 2026 07:01:16 +0000 (0:00:01.170) 1:06:29.873 ******
2026-02-03 07:01:43.536575 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:01:43.536586 | orchestrator |
2026-02-03 07:01:43.536597 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-03 07:01:43.536608 | orchestrator | Tuesday 03 February 2026 07:01:17 +0000 (0:00:01.207) 1:06:31.080 ******
2026-02-03 07:01:43.536621 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:01:43.536639 | orchestrator |
2026-02-03 07:01:43.536658 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-03 07:01:43.536686 | orchestrator | Tuesday 03 February 2026 07:01:19 +0000 (0:00:01.183) 1:06:32.264 ******
2026-02-03 07:01:43.536705 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:01:43.536723 | orchestrator |
2026-02-03 07:01:43.536810 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-03 07:01:43.536828 | orchestrator | Tuesday 03 February 2026 07:01:20 +0000 (0:00:01.178) 1:06:33.442 ******
2026-02-03 07:01:43.536846 | orchestrator | ok: [testbed-node-3]
2026-02-03 07:01:43.536866 | orchestrator |
2026-02-03 07:01:43.536883 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-03 07:01:43.536900 | orchestrator | Tuesday 03 February 2026 07:01:21 +0000 (0:00:01.309) 1:06:34.751 ******
2026-02-03 07:01:43.536920 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-03 07:01:43.536939 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-03 07:01:43.536958 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-03 07:01:43.536970 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:01:43.536981 | orchestrator |
2026-02-03 07:01:43.536992 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-03 07:01:43.537003 | orchestrator | Tuesday 03 February 2026 07:01:23 +0000 (0:00:01.944) 1:06:36.696 ******
2026-02-03 07:01:43.537014 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-03 07:01:43.537025 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-03 07:01:43.537036 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-03 07:01:43.537047 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:01:43.537057 | orchestrator |
2026-02-03 07:01:43.537068 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-03 07:01:43.537079 | orchestrator | Tuesday 03 February 2026 07:01:25 +0000 (0:00:02.143) 1:06:38.839 ******
2026-02-03 07:01:43.537089 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-03 07:01:43.537100 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-03 07:01:43.537111 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-03 07:01:43.537122 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:01:43.537132 | orchestrator |
2026-02-03 07:01:43.537143 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-03 07:01:43.537154 | orchestrator | Tuesday 03 February 2026 07:01:27 +0000 (0:00:01.503) 1:06:40.343 ******
2026-02-03 07:01:43.537165 | orchestrator | ok: [testbed-node-3]
2026-02-03 07:01:43.537176 | orchestrator |
2026-02-03 07:01:43.537198 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-03 07:01:43.537208 | orchestrator | Tuesday 03 February 2026 07:01:28 +0000 (0:00:01.201) 1:06:41.545 ******
2026-02-03 07:01:43.537219 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-03 07:01:43.537230 | orchestrator |
2026-02-03 07:01:43.537241 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-03 07:01:43.537252 | orchestrator | Tuesday 03 February 2026 07:01:29 +0000 (0:00:01.405) 1:06:42.951 ******
2026-02-03 07:01:43.537263 | orchestrator | ok: [testbed-node-3]
2026-02-03 07:01:43.537273 | orchestrator |
2026-02-03 07:01:43.537284 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-02-03 07:01:43.537295 | orchestrator | Tuesday 03 February 2026 07:01:31 +0000 (0:00:01.909) 1:06:44.861 ******
2026-02-03 07:01:43.537306 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3
2026-02-03 07:01:43.537317 | orchestrator |
2026-02-03 07:01:43.537327 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-03 07:01:43.537346 | orchestrator | Tuesday 03 February 2026 07:01:33 +0000 (0:00:01.531) 1:06:46.392 ******
2026-02-03 07:01:43.537358 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-03 07:01:43.537369 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-03 07:01:43.537380 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-03 07:01:43.537390 | orchestrator |
2026-02-03 07:01:43.537401 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-03 07:01:43.537412 | orchestrator | Tuesday 03 February 2026 07:01:36 +0000 (0:00:03.418) 1:06:49.811 ******
2026-02-03 07:01:43.537423 | orchestrator | ok: [testbed-node-3] => (item=None)
2026-02-03 07:01:43.537434 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-03 07:01:43.537444 | orchestrator | ok: [testbed-node-3]
2026-02-03 07:01:43.537455 | orchestrator |
2026-02-03 07:01:43.537466 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-02-03 07:01:43.537477 | orchestrator | Tuesday 03 February 2026 07:01:38 +0000 (0:00:01.999) 1:06:51.811 ******
2026-02-03 07:01:43.537487 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:01:43.537498 | orchestrator |
2026-02-03 07:01:43.537509 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-02-03 07:01:43.537519 | orchestrator | Tuesday 03 February 2026 07:01:39 +0000 (0:00:01.163) 1:06:52.974 ******
2026-02-03 07:01:43.537530 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3
2026-02-03 07:01:43.537542 | orchestrator |
2026-02-03 07:01:43.537553 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-02-03 07:01:43.537563 | orchestrator | Tuesday 03 February 2026 07:01:41 +0000 (0:00:01.543) 1:06:54.518 ******
2026-02-03 07:01:43.537585 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-03 07:03:00.721556 | orchestrator |
2026-02-03 07:03:00.721670 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-02-03 07:03:00.721687 | orchestrator | Tuesday 03 February 2026 07:01:43 +0000 (0:00:02.191) 1:06:56.710 ******
2026-02-03 07:03:00.721699 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-03 07:03:00.721713 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-02-03 07:03:00.721806 | orchestrator |
2026-02-03 07:03:00.721826 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-03 07:03:00.721838 | orchestrator | Tuesday 03 February 2026 07:01:48 +0000 (0:00:05.463) 1:07:02.174 ******
2026-02-03 07:03:00.721850 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-03 07:03:00.721862 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-03 07:03:00.721913 | orchestrator |
2026-02-03 07:03:00.721925 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-03 07:03:00.721936 | orchestrator | Tuesday 03 February 2026 07:01:52 +0000 (0:00:03.297) 1:07:05.471 ******
2026-02-03 07:03:00.721947 | orchestrator | ok: [testbed-node-3] => (item=None)
2026-02-03 07:03:00.721958 | orchestrator | ok: [testbed-node-3]
2026-02-03 07:03:00.721970 | orchestrator |
2026-02-03 07:03:00.721981 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-02-03 07:03:00.721993 | orchestrator | Tuesday 03 February 2026 07:01:54 +0000 (0:00:02.164) 1:07:07.635 ******
2026-02-03 07:03:00.722004 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2026-02-03 07:03:00.722070 | orchestrator |
2026-02-03 07:03:00.722086 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-02-03 07:03:00.722099 | orchestrator | Tuesday 03 February 2026 07:01:56 +0000 (0:00:01.621) 1:07:09.257 ******
2026-02-03 07:03:00.722113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-03 07:03:00.722127 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-03 07:03:00.722140 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-03 07:03:00.722153 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-03 07:03:00.722167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-03 07:03:00.722181 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:03:00.722194 | orchestrator |
2026-02-03 07:03:00.722207 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-02-03 07:03:00.722219 | orchestrator | Tuesday 03 February 2026 07:01:57 +0000 (0:00:01.704) 1:07:10.962 ******
2026-02-03 07:03:00.722232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-03 07:03:00.722245 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-03 07:03:00.722258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-03 07:03:00.722286 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-03 07:03:00.722301 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-03 07:03:00.722314 | orchestrator | skipping: [testbed-node-3]
2026-02-03 07:03:00.722327 | orchestrator |
2026-02-03 07:03:00.722339 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-02-03 07:03:00.722352 | orchestrator | Tuesday 03 February 2026 07:01:59 +0000 (0:00:01.702) 1:07:12.665 ******
2026-02-03 07:03:00.722366 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-03 07:03:00.722381
| orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-03 07:03:00.722395 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-03 07:03:00.722407 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-03 07:03:00.722419 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-03 07:03:00.722440 | orchestrator | 2026-02-03 07:03:00.722451 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-03 07:03:00.722482 | orchestrator | Tuesday 03 February 2026 07:02:31 +0000 (0:00:32.455) 1:07:45.120 ****** 2026-02-03 07:03:00.722494 | orchestrator | skipping: [testbed-node-3] 2026-02-03 07:03:00.722506 | orchestrator | 2026-02-03 07:03:00.722517 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-03 07:03:00.722528 | orchestrator | Tuesday 03 February 2026 07:02:33 +0000 (0:00:01.199) 1:07:46.320 ****** 2026-02-03 07:03:00.722540 | orchestrator | skipping: [testbed-node-3] 2026-02-03 07:03:00.722551 | orchestrator | 2026-02-03 07:03:00.722562 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-03 07:03:00.722573 | orchestrator | Tuesday 03 February 2026 07:02:34 +0000 (0:00:01.165) 1:07:47.486 ****** 2026-02-03 07:03:00.722584 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3 2026-02-03 07:03:00.722595 | orchestrator | 2026-02-03 07:03:00.722606 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-02-03 07:03:00.722618 | orchestrator | Tuesday 03 February 2026 07:02:35 +0000 (0:00:01.568) 1:07:49.055 ****** 2026-02-03 07:03:00.722629 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3 2026-02-03 07:03:00.722640 | orchestrator | 2026-02-03 07:03:00.722651 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-03 07:03:00.722662 | orchestrator | Tuesday 03 February 2026 07:02:37 +0000 (0:00:01.740) 1:07:50.796 ****** 2026-02-03 07:03:00.722674 | orchestrator | ok: [testbed-node-3] 2026-02-03 07:03:00.722685 | orchestrator | 2026-02-03 07:03:00.722696 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-03 07:03:00.722707 | orchestrator | Tuesday 03 February 2026 07:02:39 +0000 (0:00:02.126) 1:07:52.922 ****** 2026-02-03 07:03:00.722748 | orchestrator | ok: [testbed-node-3] 2026-02-03 07:03:00.722762 | orchestrator | 2026-02-03 07:03:00.722773 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-03 07:03:00.722784 | orchestrator | Tuesday 03 February 2026 07:02:41 +0000 (0:00:01.976) 1:07:54.899 ****** 2026-02-03 07:03:00.722795 | orchestrator | ok: [testbed-node-3] 2026-02-03 07:03:00.722806 | orchestrator | 2026-02-03 07:03:00.722817 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-03 07:03:00.722828 | orchestrator | Tuesday 03 February 2026 07:02:44 +0000 (0:00:02.410) 1:07:57.309 ****** 2026-02-03 07:03:00.722839 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-03 07:03:00.722851 | orchestrator | 2026-02-03 07:03:00.722862 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-02-03 07:03:00.722872 | 
orchestrator | 2026-02-03 07:03:00.722883 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-03 07:03:00.722894 | orchestrator | Tuesday 03 February 2026 07:02:47 +0000 (0:00:02.982) 1:08:00.292 ****** 2026-02-03 07:03:00.722906 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-02-03 07:03:00.722917 | orchestrator | 2026-02-03 07:03:00.722928 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-03 07:03:00.722939 | orchestrator | Tuesday 03 February 2026 07:02:48 +0000 (0:00:01.316) 1:08:01.609 ****** 2026-02-03 07:03:00.722950 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:03:00.722961 | orchestrator | 2026-02-03 07:03:00.722972 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-03 07:03:00.722983 | orchestrator | Tuesday 03 February 2026 07:02:49 +0000 (0:00:01.473) 1:08:03.083 ****** 2026-02-03 07:03:00.722994 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:03:00.723005 | orchestrator | 2026-02-03 07:03:00.723016 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-03 07:03:00.723034 | orchestrator | Tuesday 03 February 2026 07:02:51 +0000 (0:00:01.187) 1:08:04.271 ****** 2026-02-03 07:03:00.723046 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:03:00.723056 | orchestrator | 2026-02-03 07:03:00.723068 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-03 07:03:00.723079 | orchestrator | Tuesday 03 February 2026 07:02:52 +0000 (0:00:01.567) 1:08:05.838 ****** 2026-02-03 07:03:00.723090 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:03:00.723101 | orchestrator | 2026-02-03 07:03:00.723117 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-03 07:03:00.723128 | orchestrator | Tuesday 03 
February 2026 07:02:53 +0000 (0:00:01.220) 1:08:07.059 ****** 2026-02-03 07:03:00.723140 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:03:00.723151 | orchestrator | 2026-02-03 07:03:00.723162 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-03 07:03:00.723173 | orchestrator | Tuesday 03 February 2026 07:02:55 +0000 (0:00:01.258) 1:08:08.317 ****** 2026-02-03 07:03:00.723184 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:03:00.723195 | orchestrator | 2026-02-03 07:03:00.723206 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-03 07:03:00.723217 | orchestrator | Tuesday 03 February 2026 07:02:56 +0000 (0:00:01.245) 1:08:09.562 ****** 2026-02-03 07:03:00.723228 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:03:00.723239 | orchestrator | 2026-02-03 07:03:00.723250 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-03 07:03:00.723261 | orchestrator | Tuesday 03 February 2026 07:02:57 +0000 (0:00:01.244) 1:08:10.807 ****** 2026-02-03 07:03:00.723273 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:03:00.723284 | orchestrator | 2026-02-03 07:03:00.723295 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-03 07:03:00.723306 | orchestrator | Tuesday 03 February 2026 07:02:58 +0000 (0:00:01.192) 1:08:11.999 ****** 2026-02-03 07:03:00.723317 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 07:03:00.723328 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 07:03:00.723339 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 07:03:00.723350 | orchestrator | 2026-02-03 07:03:00.723361 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-02-03 07:03:00.723379 | orchestrator | Tuesday 03 February 2026 07:03:00 +0000 (0:00:01.896) 1:08:13.896 ****** 2026-02-03 07:03:29.156472 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:03:29.156633 | orchestrator | 2026-02-03 07:03:29.156654 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-03 07:03:29.156668 | orchestrator | Tuesday 03 February 2026 07:03:02 +0000 (0:00:01.443) 1:08:15.339 ****** 2026-02-03 07:03:29.156680 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 07:03:29.156692 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 07:03:29.156703 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 07:03:29.156714 | orchestrator | 2026-02-03 07:03:29.156802 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-03 07:03:29.156823 | orchestrator | Tuesday 03 February 2026 07:03:05 +0000 (0:00:03.685) 1:08:19.024 ****** 2026-02-03 07:03:29.156840 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-03 07:03:29.156858 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-03 07:03:29.156875 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-03 07:03:29.156892 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:03:29.156909 | orchestrator | 2026-02-03 07:03:29.156927 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-03 07:03:29.156942 | orchestrator | Tuesday 03 February 2026 07:03:07 +0000 (0:00:01.755) 1:08:20.780 ****** 2026-02-03 07:03:29.156995 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-03 07:03:29.157017 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-03 07:03:29.157038 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-03 07:03:29.157056 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:03:29.157075 | orchestrator | 2026-02-03 07:03:29.157092 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-03 07:03:29.157110 | orchestrator | Tuesday 03 February 2026 07:03:09 +0000 (0:00:02.114) 1:08:22.895 ****** 2026-02-03 07:03:29.157132 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 07:03:29.157179 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 07:03:29.157199 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 07:03:29.157214 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:03:29.157227 | orchestrator | 2026-02-03 07:03:29.157240 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-03 07:03:29.157253 | orchestrator | Tuesday 03 February 2026 07:03:10 +0000 (0:00:01.213) 1:08:24.108 ****** 2026-02-03 07:03:29.157291 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'fc9af7e241e8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-03 07:03:02.794771', 'end': '2026-02-03 07:03:02.846162', 'delta': '0:00:00.051391', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fc9af7e241e8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-03 07:03:29.157308 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'a8f198eef309', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-03 07:03:03.424382', 'end': '2026-02-03 07:03:03.476665', 'delta': '0:00:00.052283', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a8f198eef309'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-03 07:03:29.157334 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '79d18794d8bb', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-03 07:03:04.470683', 'end': '2026-02-03 07:03:04.541184', 'delta': '0:00:00.070501', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['79d18794d8bb'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-03 07:03:29.157346 | orchestrator | 2026-02-03 07:03:29.157358 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-03 07:03:29.157369 | orchestrator | Tuesday 03 February 2026 07:03:12 +0000 (0:00:01.278) 1:08:25.387 ****** 2026-02-03 07:03:29.157380 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:03:29.157391 | orchestrator | 2026-02-03 07:03:29.157402 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-03 07:03:29.157413 | orchestrator | Tuesday 03 February 2026 07:03:13 +0000 (0:00:01.764) 1:08:27.152 ****** 2026-02-03 07:03:29.157424 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:03:29.157435 | orchestrator | 2026-02-03 07:03:29.157446 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-02-03 07:03:29.157457 | orchestrator | Tuesday 03 February 2026 07:03:15 +0000 (0:00:01.912) 1:08:29.065 ****** 2026-02-03 07:03:29.157468 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:03:29.157479 | orchestrator | 2026-02-03 07:03:29.157490 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-03 07:03:29.157501 | orchestrator | Tuesday 03 February 2026 07:03:17 +0000 (0:00:01.340) 1:08:30.406 ****** 2026-02-03 07:03:29.157513 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-03 07:03:29.157523 | orchestrator | 2026-02-03 07:03:29.157535 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-03 07:03:29.157546 | orchestrator | Tuesday 03 February 2026 07:03:19 +0000 (0:00:02.101) 1:08:32.508 ****** 2026-02-03 07:03:29.157556 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:03:29.157567 | orchestrator | 2026-02-03 07:03:29.157578 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-03 07:03:29.157595 | orchestrator | Tuesday 03 February 2026 07:03:20 +0000 (0:00:01.205) 1:08:33.714 ****** 2026-02-03 07:03:29.157606 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:03:29.157617 | orchestrator | 2026-02-03 07:03:29.157628 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-03 07:03:29.157639 | orchestrator | Tuesday 03 February 2026 07:03:21 +0000 (0:00:01.134) 1:08:34.848 ****** 2026-02-03 07:03:29.157650 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:03:29.157661 | orchestrator | 2026-02-03 07:03:29.157672 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-03 07:03:29.157682 | orchestrator | Tuesday 03 February 2026 07:03:23 +0000 (0:00:01.340) 1:08:36.188 ****** 2026-02-03 07:03:29.157693 | orchestrator | 
skipping: [testbed-node-4] 2026-02-03 07:03:29.157704 | orchestrator | 2026-02-03 07:03:29.157741 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-03 07:03:29.157755 | orchestrator | Tuesday 03 February 2026 07:03:24 +0000 (0:00:01.154) 1:08:37.343 ****** 2026-02-03 07:03:29.157766 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:03:29.157776 | orchestrator | 2026-02-03 07:03:29.157787 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-03 07:03:29.157805 | orchestrator | Tuesday 03 February 2026 07:03:25 +0000 (0:00:01.291) 1:08:38.634 ****** 2026-02-03 07:03:29.157817 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:03:29.157827 | orchestrator | 2026-02-03 07:03:29.157838 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-03 07:03:29.157850 | orchestrator | Tuesday 03 February 2026 07:03:26 +0000 (0:00:01.285) 1:08:39.919 ****** 2026-02-03 07:03:29.157861 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:03:29.157872 | orchestrator | 2026-02-03 07:03:29.157883 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-03 07:03:29.157894 | orchestrator | Tuesday 03 February 2026 07:03:27 +0000 (0:00:01.171) 1:08:41.091 ****** 2026-02-03 07:03:29.157905 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:03:29.157915 | orchestrator | 2026-02-03 07:03:29.157926 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-03 07:03:29.157944 | orchestrator | Tuesday 03 February 2026 07:03:29 +0000 (0:00:01.237) 1:08:42.328 ****** 2026-02-03 07:03:31.812696 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:03:31.812870 | orchestrator | 2026-02-03 07:03:31.812891 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-03 07:03:31.812904 
| orchestrator | Tuesday 03 February 2026 07:03:30 +0000 (0:00:01.180) 1:08:43.509 ****** 2026-02-03 07:03:31.812918 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:03:31.812930 | orchestrator | 2026-02-03 07:03:31.812941 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-03 07:03:31.812953 | orchestrator | Tuesday 03 February 2026 07:03:31 +0000 (0:00:01.211) 1:08:44.720 ****** 2026-02-03 07:03:31.812967 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 07:03:31.812984 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1a37b12a--042e--589b--8d7d--13944ef33291-osd--block--1a37b12a--042e--589b--8d7d--13944ef33291', 'dm-uuid-LVM-F6tlR8rX28mHBuGZmIB9CPxCef1PwVO1F69HDz3pfwyuxUfx8QlY6u3q4wNOYZvt'], 'uuids': ['ee84a40a-c8f5-4363-8b92-865eb14b3049'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f58f055b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['F69HDz-3pfw-yuxU-fx8Q-lY6u-3q4w-NOYZvt']}})  2026-02-03 07:03:31.812999 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15b94581-7087-40af-83f2-cd9970e768be', 'scsi-SQEMU_QEMU_HARDDISK_15b94581-7087-40af-83f2-cd9970e768be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '15b94581', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-03 07:03:31.813030 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-OIAfSx-9FrO-G71T-2YtW-9cXZ-u9sv-iVlruI', 'scsi-0QEMU_QEMU_HARDDISK_6b074c22-654d-40e5-9251-7e10d9fad41a', 'scsi-SQEMU_QEMU_HARDDISK_6b074c22-654d-40e5-9251-7e10d9fad41a'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6b074c22', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--121565c5--01e5--5794--959e--075d91e35362-osd--block--121565c5--01e5--5794--959e--075d91e35362']}})  2026-02-03 07:03:31.813062 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 07:03:31.813075 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 07:03:31.813106 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-03 07:03:31.813119 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 07:03:31.813132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0SrgRP-hb81-g1cZ-8CRq-deoz-Hyru-PhRzun', 'dm-uuid-CRYPT-LUKS2-1805b057808e47489bd25959cb85c8e5-0SrgRP-hb81-g1cZ-8CRq-deoz-Hyru-PhRzun'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-03 07:03:31.813143 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 07:03:31.813155 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--121565c5--01e5--5794--959e--075d91e35362-osd--block--121565c5--01e5--5794--959e--075d91e35362', 'dm-uuid-LVM-JxrjzObQ9uufb9OS44FMciQneXibANhw0SrgRPhb81g1cZ8CRqdeozHyruPhRzun'], 'uuids': ['1805b057-808e-4748-9bd2-5959cb85c8e5'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6b074c22', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0SrgRP-hb81-g1cZ-8CRq-deoz-Hyru-PhRzun']}})  2026-02-03 07:03:31.813180 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-QlIL1O-6aa2-xc1n-eTaR-0yU7-qpeR-rfKE1n', 'scsi-0QEMU_QEMU_HARDDISK_f58f055b-eadc-4fe1-a72e-2d1917f1f2dd', 'scsi-SQEMU_QEMU_HARDDISK_f58f055b-eadc-4fe1-a72e-2d1917f1f2dd'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f58f055b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1a37b12a--042e--589b--8d7d--13944ef33291-osd--block--1a37b12a--042e--589b--8d7d--13944ef33291']}})  2026-02-03 07:03:31.813192 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 07:03:31.813217 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9ac79520', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-03 07:03:33.364475 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 07:03:33.364564 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 07:03:33.364611 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-F69HDz-3pfw-yuxU-fx8Q-lY6u-3q4w-NOYZvt', 'dm-uuid-CRYPT-LUKS2-ee84a40ac8f543638b92865eb14b3049-F69HDz-3pfw-yuxU-fx8Q-lY6u-3q4w-NOYZvt'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-03 07:03:33.364624 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:03:33.364634 | orchestrator | 2026-02-03 07:03:33.364643 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-03 07:03:33.364652 | orchestrator | Tuesday 03 February 2026 07:03:33 +0000 (0:00:01.574) 1:08:46.295 ****** 2026-02-03 07:03:33.364661 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 07:03:33.364671 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1a37b12a--042e--589b--8d7d--13944ef33291-osd--block--1a37b12a--042e--589b--8d7d--13944ef33291', 'dm-uuid-LVM-F6tlR8rX28mHBuGZmIB9CPxCef1PwVO1F69HDz3pfwyuxUfx8QlY6u3q4wNOYZvt'], 'uuids': ['ee84a40a-c8f5-4363-8b92-865eb14b3049'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f58f055b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['F69HDz-3pfw-yuxU-fx8Q-lY6u-3q4w-NOYZvt']}}, 'ansible_loop_var': 'item'})  2026-02-03 07:03:33.364681 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15b94581-7087-40af-83f2-cd9970e768be', 'scsi-SQEMU_QEMU_HARDDISK_15b94581-7087-40af-83f2-cd9970e768be'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '15b94581', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 07:03:33.364706 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-OIAfSx-9FrO-G71T-2YtW-9cXZ-u9sv-iVlruI', 'scsi-0QEMU_QEMU_HARDDISK_6b074c22-654d-40e5-9251-7e10d9fad41a', 'scsi-SQEMU_QEMU_HARDDISK_6b074c22-654d-40e5-9251-7e10d9fad41a'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6b074c22', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--121565c5--01e5--5794--959e--075d91e35362-osd--block--121565c5--01e5--5794--959e--075d91e35362']}}, 'ansible_loop_var': 'item'})  2026-02-03 07:03:33.364783 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 07:03:33.364794 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 07:03:33.364808 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 07:03:33.364823 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 07:03:33.364846 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0SrgRP-hb81-g1cZ-8CRq-deoz-Hyru-PhRzun', 'dm-uuid-CRYPT-LUKS2-1805b057808e47489bd25959cb85c8e5-0SrgRP-hb81-g1cZ-8CRq-deoz-Hyru-PhRzun'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 07:03:38.985221 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 07:03:38.985367 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--121565c5--01e5--5794--959e--075d91e35362-osd--block--121565c5--01e5--5794--959e--075d91e35362', 'dm-uuid-LVM-JxrjzObQ9uufb9OS44FMciQneXibANhw0SrgRPhb81g1cZ8CRqdeozHyruPhRzun'], 'uuids': ['1805b057-808e-4748-9bd2-5959cb85c8e5'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6b074c22', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0SrgRP-hb81-g1cZ-8CRq-deoz-Hyru-PhRzun']}}, 'ansible_loop_var': 'item'})  2026-02-03 07:03:38.985386 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-QlIL1O-6aa2-xc1n-eTaR-0yU7-qpeR-rfKE1n', 'scsi-0QEMU_QEMU_HARDDISK_f58f055b-eadc-4fe1-a72e-2d1917f1f2dd', 'scsi-SQEMU_QEMU_HARDDISK_f58f055b-eadc-4fe1-a72e-2d1917f1f2dd'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f58f055b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1a37b12a--042e--589b--8d7d--13944ef33291-osd--block--1a37b12a--042e--589b--8d7d--13944ef33291']}}, 'ansible_loop_var': 'item'})  2026-02-03 07:03:38.985401 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 07:03:38.985443 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9ac79520', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_9ac79520-7901-4e67-81d0-fc013cb298e8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 07:03:38.985467 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 07:03:38.985479 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 07:03:38.985491 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-F69HDz-3pfw-yuxU-fx8Q-lY6u-3q4w-NOYZvt', 'dm-uuid-CRYPT-LUKS2-ee84a40ac8f543638b92865eb14b3049-F69HDz-3pfw-yuxU-fx8Q-lY6u-3q4w-NOYZvt'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 07:03:38.985504 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:03:38.985518 | orchestrator | 2026-02-03 07:03:38.985530 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-03 07:03:38.985542 | orchestrator | Tuesday 03 February 2026 07:03:34 +0000 (0:00:01.517) 1:08:47.812 ****** 2026-02-03 07:03:38.985553 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:03:38.985564 | orchestrator | 2026-02-03 07:03:38.985575 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-03 07:03:38.985586 | orchestrator | Tuesday 03 February 2026 07:03:36 +0000 (0:00:01.609) 1:08:49.422 ****** 2026-02-03 07:03:38.985603 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:03:38.985614 | orchestrator | 2026-02-03 07:03:38.985625 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-03 07:03:38.985636 | orchestrator | Tuesday 03 February 2026 07:03:37 +0000 (0:00:01.231) 1:08:50.653 ****** 2026-02-03 07:03:38.985647 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:03:38.985658 | orchestrator | 2026-02-03 07:03:38.985669 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-03 07:03:38.985686 | orchestrator | Tuesday 03 February 2026 07:03:38 +0000 (0:00:01.506) 1:08:52.160 ****** 2026-02-03 07:04:23.460424 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:04:23.460560 | orchestrator | 2026-02-03 07:04:23.460590 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-03 07:04:23.460611 | orchestrator | Tuesday 03 February 2026 07:03:40 +0000 (0:00:01.153) 1:08:53.313 ****** 2026-02-03 07:04:23.460627 | orchestrator | skipping: [testbed-node-4] 2026-02-03 
07:04:23.460645 | orchestrator | 2026-02-03 07:04:23.460663 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-03 07:04:23.460681 | orchestrator | Tuesday 03 February 2026 07:03:41 +0000 (0:00:01.250) 1:08:54.564 ****** 2026-02-03 07:04:23.460699 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:04:23.460798 | orchestrator | 2026-02-03 07:04:23.460820 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-03 07:04:23.460839 | orchestrator | Tuesday 03 February 2026 07:03:42 +0000 (0:00:01.163) 1:08:55.728 ****** 2026-02-03 07:04:23.460860 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-03 07:04:23.460879 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-03 07:04:23.460896 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-03 07:04:23.460915 | orchestrator | 2026-02-03 07:04:23.460957 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-03 07:04:23.460977 | orchestrator | Tuesday 03 February 2026 07:03:44 +0000 (0:00:02.175) 1:08:57.903 ****** 2026-02-03 07:04:23.460997 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-03 07:04:23.461016 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-03 07:04:23.461035 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-03 07:04:23.461055 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:04:23.461076 | orchestrator | 2026-02-03 07:04:23.461094 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-03 07:04:23.461113 | orchestrator | Tuesday 03 February 2026 07:03:45 +0000 (0:00:01.238) 1:08:59.142 ****** 2026-02-03 07:04:23.461131 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4 2026-02-03 07:04:23.461153 | 
orchestrator | 2026-02-03 07:04:23.461176 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-03 07:04:23.461196 | orchestrator | Tuesday 03 February 2026 07:03:47 +0000 (0:00:01.195) 1:09:00.337 ****** 2026-02-03 07:04:23.461215 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:04:23.461261 | orchestrator | 2026-02-03 07:04:23.461282 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-03 07:04:23.461301 | orchestrator | Tuesday 03 February 2026 07:03:48 +0000 (0:00:01.219) 1:09:01.556 ****** 2026-02-03 07:04:23.461319 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:04:23.461337 | orchestrator | 2026-02-03 07:04:23.461353 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-03 07:04:23.461371 | orchestrator | Tuesday 03 February 2026 07:03:49 +0000 (0:00:01.331) 1:09:02.888 ****** 2026-02-03 07:04:23.461388 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:04:23.461407 | orchestrator | 2026-02-03 07:04:23.461426 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-03 07:04:23.461445 | orchestrator | Tuesday 03 February 2026 07:03:50 +0000 (0:00:01.200) 1:09:04.089 ****** 2026-02-03 07:04:23.461497 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:04:23.461518 | orchestrator | 2026-02-03 07:04:23.461537 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-03 07:04:23.461555 | orchestrator | Tuesday 03 February 2026 07:03:52 +0000 (0:00:01.306) 1:09:05.395 ****** 2026-02-03 07:04:23.461574 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-03 07:04:23.461592 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-03 07:04:23.461611 | orchestrator | skipping: [testbed-node-4] 
=> (item=testbed-node-5)  2026-02-03 07:04:23.461629 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:04:23.461647 | orchestrator | 2026-02-03 07:04:23.461664 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-03 07:04:23.461682 | orchestrator | Tuesday 03 February 2026 07:03:53 +0000 (0:00:01.504) 1:09:06.900 ****** 2026-02-03 07:04:23.461701 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-03 07:04:23.461754 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-03 07:04:23.461773 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-03 07:04:23.461792 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:04:23.461810 | orchestrator | 2026-02-03 07:04:23.461830 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-03 07:04:23.461848 | orchestrator | Tuesday 03 February 2026 07:03:55 +0000 (0:00:01.490) 1:09:08.390 ****** 2026-02-03 07:04:23.461866 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-03 07:04:23.461886 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-03 07:04:23.461905 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-03 07:04:23.461924 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:04:23.461937 | orchestrator | 2026-02-03 07:04:23.461948 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-03 07:04:23.461960 | orchestrator | Tuesday 03 February 2026 07:03:56 +0000 (0:00:01.516) 1:09:09.907 ****** 2026-02-03 07:04:23.461971 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:04:23.461982 | orchestrator | 2026-02-03 07:04:23.461993 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-03 07:04:23.462004 | orchestrator | Tuesday 03 February 2026 07:03:57 +0000 
(0:00:01.272) 1:09:11.179 ****** 2026-02-03 07:04:23.462015 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-03 07:04:23.462088 | orchestrator | 2026-02-03 07:04:23.462099 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-03 07:04:23.462110 | orchestrator | Tuesday 03 February 2026 07:03:59 +0000 (0:00:01.436) 1:09:12.616 ****** 2026-02-03 07:04:23.462148 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 07:04:23.462160 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 07:04:23.462171 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 07:04:23.462182 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-03 07:04:23.462192 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-02-03 07:04:23.462203 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-03 07:04:23.462214 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-03 07:04:23.462225 | orchestrator | 2026-02-03 07:04:23.462235 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-03 07:04:23.462246 | orchestrator | Tuesday 03 February 2026 07:04:01 +0000 (0:00:02.416) 1:09:15.033 ****** 2026-02-03 07:04:23.462257 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 07:04:23.462278 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 07:04:23.462289 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 07:04:23.462313 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-02-03 07:04:23.462323 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-02-03 07:04:23.462334 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-03 07:04:23.462345 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-03 07:04:23.462356 | orchestrator | 2026-02-03 07:04:23.462367 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-02-03 07:04:23.462378 | orchestrator | Tuesday 03 February 2026 07:04:04 +0000 (0:00:02.792) 1:09:17.826 ****** 2026-02-03 07:04:23.462388 | orchestrator | changed: [testbed-node-4] 2026-02-03 07:04:23.462399 | orchestrator | 2026-02-03 07:04:23.462410 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-02-03 07:04:23.462422 | orchestrator | Tuesday 03 February 2026 07:04:06 +0000 (0:00:02.133) 1:09:19.959 ****** 2026-02-03 07:04:23.462433 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-03 07:04:23.462444 | orchestrator | 2026-02-03 07:04:23.462455 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-02-03 07:04:23.462466 | orchestrator | Tuesday 03 February 2026 07:04:09 +0000 (0:00:02.591) 1:09:22.550 ****** 2026-02-03 07:04:23.462476 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-03 07:04:23.462487 | orchestrator | 2026-02-03 07:04:23.462498 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-03 07:04:23.462509 | orchestrator | Tuesday 03 February 2026 07:04:11 +0000 (0:00:02.143) 1:09:24.693 ****** 2026-02-03 07:04:23.462520 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4 2026-02-03 07:04:23.462531 | orchestrator | 2026-02-03 07:04:23.462541 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-03 07:04:23.462552 | orchestrator | Tuesday 03 February 2026 07:04:12 +0000 (0:00:01.173) 1:09:25.867 ****** 2026-02-03 07:04:23.462563 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4 2026-02-03 07:04:23.462574 | orchestrator | 2026-02-03 07:04:23.462585 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-03 07:04:23.462595 | orchestrator | Tuesday 03 February 2026 07:04:13 +0000 (0:00:01.216) 1:09:27.083 ****** 2026-02-03 07:04:23.462606 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:04:23.462617 | orchestrator | 2026-02-03 07:04:23.462628 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-03 07:04:23.462639 | orchestrator | Tuesday 03 February 2026 07:04:15 +0000 (0:00:01.202) 1:09:28.286 ****** 2026-02-03 07:04:23.462649 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:04:23.462660 | orchestrator | 2026-02-03 07:04:23.462671 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-02-03 07:04:23.462682 | orchestrator | Tuesday 03 February 2026 07:04:16 +0000 (0:00:01.558) 1:09:29.845 ****** 2026-02-03 07:04:23.462692 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:04:23.462703 | orchestrator | 2026-02-03 07:04:23.462746 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-03 07:04:23.462762 | orchestrator | Tuesday 03 February 2026 07:04:18 +0000 (0:00:01.569) 1:09:31.415 ****** 2026-02-03 07:04:23.462773 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:04:23.462784 | orchestrator | 2026-02-03 07:04:23.462795 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-03 07:04:23.462805 | orchestrator | Tuesday 03 February 2026 07:04:19 +0000 (0:00:01.606) 1:09:33.022 ****** 2026-02-03 07:04:23.462816 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:04:23.462827 | orchestrator | 2026-02-03 07:04:23.462837 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-03 07:04:23.462855 | orchestrator | Tuesday 03 February 2026 07:04:21 +0000 (0:00:01.176) 1:09:34.199 ****** 2026-02-03 07:04:23.462866 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:04:23.462877 | orchestrator | 2026-02-03 07:04:23.462887 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-03 07:04:23.462898 | orchestrator | Tuesday 03 February 2026 07:04:22 +0000 (0:00:01.271) 1:09:35.470 ****** 2026-02-03 07:04:23.462909 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:04:23.462920 | orchestrator | 2026-02-03 07:04:23.462931 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-03 07:04:23.462950 | orchestrator | Tuesday 03 February 2026 07:04:23 +0000 (0:00:01.163) 1:09:36.633 ****** 2026-02-03 07:05:05.908404 | 
orchestrator | ok: [testbed-node-4]
2026-02-03 07:05:05.908523 | orchestrator |
2026-02-03 07:05:05.908541 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-03 07:05:05.908555 | orchestrator | Tuesday 03 February 2026 07:04:25 +0000 (0:00:01.681) 1:09:38.315 ******
2026-02-03 07:05:05.908566 | orchestrator | ok: [testbed-node-4]
2026-02-03 07:05:05.908577 | orchestrator |
2026-02-03 07:05:05.908589 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-03 07:05:05.908601 | orchestrator | Tuesday 03 February 2026 07:04:26 +0000 (0:00:01.678) 1:09:39.993 ******
2026-02-03 07:05:05.908612 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:05.908624 | orchestrator |
2026-02-03 07:05:05.908635 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-03 07:05:05.908646 | orchestrator | Tuesday 03 February 2026 07:04:27 +0000 (0:00:00.807) 1:09:40.800 ******
2026-02-03 07:05:05.908657 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:05.908669 | orchestrator |
2026-02-03 07:05:05.908696 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-03 07:05:05.908764 | orchestrator | Tuesday 03 February 2026 07:04:28 +0000 (0:00:00.911) 1:09:41.712 ******
2026-02-03 07:05:05.908776 | orchestrator | ok: [testbed-node-4]
2026-02-03 07:05:05.908788 | orchestrator |
2026-02-03 07:05:05.908799 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-03 07:05:05.908810 | orchestrator | Tuesday 03 February 2026 07:04:29 +0000 (0:00:00.852) 1:09:42.565 ******
2026-02-03 07:05:05.908821 | orchestrator | ok: [testbed-node-4]
2026-02-03 07:05:05.908832 | orchestrator |
2026-02-03 07:05:05.908843 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-03 07:05:05.908854 | orchestrator | Tuesday 03 February 2026 07:04:30 +0000 (0:00:00.874) 1:09:43.439 ******
2026-02-03 07:05:05.908865 | orchestrator | ok: [testbed-node-4]
2026-02-03 07:05:05.908876 | orchestrator |
2026-02-03 07:05:05.908887 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-03 07:05:05.908898 | orchestrator | Tuesday 03 February 2026 07:04:31 +0000 (0:00:00.846) 1:09:44.286 ******
2026-02-03 07:05:05.908909 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:05.908920 | orchestrator |
2026-02-03 07:05:05.908933 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-03 07:05:05.908946 | orchestrator | Tuesday 03 February 2026 07:04:31 +0000 (0:00:00.831) 1:09:45.117 ******
2026-02-03 07:05:05.908959 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:05.908973 | orchestrator |
2026-02-03 07:05:05.908987 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-03 07:05:05.909000 | orchestrator | Tuesday 03 February 2026 07:04:32 +0000 (0:00:00.842) 1:09:45.960 ******
2026-02-03 07:05:05.909013 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:05.909027 | orchestrator |
2026-02-03 07:05:05.909040 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-03 07:05:05.909054 | orchestrator | Tuesday 03 February 2026 07:04:33 +0000 (0:00:00.833) 1:09:46.793 ******
2026-02-03 07:05:05.909066 | orchestrator | ok: [testbed-node-4]
2026-02-03 07:05:05.909080 | orchestrator |
2026-02-03 07:05:05.909094 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-03 07:05:05.909131 | orchestrator | Tuesday 03 February 2026 07:04:34 +0000 (0:00:00.833) 1:09:47.627 ******
2026-02-03 07:05:05.909147 | orchestrator | ok: [testbed-node-4]
2026-02-03 07:05:05.909160 | orchestrator |
2026-02-03 07:05:05.909173 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-03 07:05:05.909187 | orchestrator | Tuesday 03 February 2026 07:04:35 +0000 (0:00:00.842) 1:09:48.469 ******
2026-02-03 07:05:05.909201 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:05.909213 | orchestrator |
2026-02-03 07:05:05.909228 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-03 07:05:05.909241 | orchestrator | Tuesday 03 February 2026 07:04:36 +0000 (0:00:00.806) 1:09:49.276 ******
2026-02-03 07:05:05.909254 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:05.909269 | orchestrator |
2026-02-03 07:05:05.909283 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-03 07:05:05.909296 | orchestrator | Tuesday 03 February 2026 07:04:37 +0000 (0:00:01.187) 1:09:50.463 ******
2026-02-03 07:05:05.909307 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:05.909318 | orchestrator |
2026-02-03 07:05:05.909330 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-03 07:05:05.909341 | orchestrator | Tuesday 03 February 2026 07:04:38 +0000 (0:00:00.832) 1:09:51.296 ******
2026-02-03 07:05:05.909352 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:05.909363 | orchestrator |
2026-02-03 07:05:05.909374 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-03 07:05:05.909385 | orchestrator | Tuesday 03 February 2026 07:04:38 +0000 (0:00:00.807) 1:09:52.104 ******
2026-02-03 07:05:05.909396 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:05.909407 | orchestrator |
2026-02-03 07:05:05.909418 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-03 07:05:05.909430 | orchestrator | Tuesday 03 February 2026 07:04:39 +0000 (0:00:00.828) 1:09:52.932 ******
2026-02-03 07:05:05.909441 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:05.909452 | orchestrator |
2026-02-03 07:05:05.909463 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-03 07:05:05.909474 | orchestrator | Tuesday 03 February 2026 07:04:40 +0000 (0:00:00.844) 1:09:53.777 ******
2026-02-03 07:05:05.909485 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:05.909496 | orchestrator |
2026-02-03 07:05:05.909507 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-03 07:05:05.909519 | orchestrator | Tuesday 03 February 2026 07:04:41 +0000 (0:00:00.799) 1:09:54.576 ******
2026-02-03 07:05:05.909530 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:05.909541 | orchestrator |
2026-02-03 07:05:05.909552 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-03 07:05:05.909564 | orchestrator | Tuesday 03 February 2026 07:04:42 +0000 (0:00:00.817) 1:09:55.393 ******
2026-02-03 07:05:05.909575 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:05.909586 | orchestrator |
2026-02-03 07:05:05.909615 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-03 07:05:05.909627 | orchestrator | Tuesday 03 February 2026 07:04:43 +0000 (0:00:00.813) 1:09:56.207 ******
2026-02-03 07:05:05.909638 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:05.909649 | orchestrator |
2026-02-03 07:05:05.909660 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-03 07:05:05.909672 | orchestrator | Tuesday 03 February 2026 07:04:43 +0000 (0:00:00.805) 1:09:57.013 ******
2026-02-03 07:05:05.909683 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:05.909694 | orchestrator |
2026-02-03 07:05:05.909724 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-03 07:05:05.909736 | orchestrator | Tuesday 03 February 2026 07:04:44 +0000 (0:00:00.810) 1:09:57.823 ******
2026-02-03 07:05:05.909747 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:05.909758 | orchestrator |
2026-02-03 07:05:05.909769 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-03 07:05:05.909788 | orchestrator | Tuesday 03 February 2026 07:04:45 +0000 (0:00:00.802) 1:09:58.626 ******
2026-02-03 07:05:05.909806 | orchestrator | ok: [testbed-node-4]
2026-02-03 07:05:05.909817 | orchestrator |
2026-02-03 07:05:05.909828 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-03 07:05:05.909839 | orchestrator | Tuesday 03 February 2026 07:04:47 +0000 (0:00:01.617) 1:10:00.244 ******
2026-02-03 07:05:05.909850 | orchestrator | ok: [testbed-node-4]
2026-02-03 07:05:05.909861 | orchestrator |
2026-02-03 07:05:05.909871 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-03 07:05:05.909882 | orchestrator | Tuesday 03 February 2026 07:04:48 +0000 (0:00:01.905) 1:10:02.149 ******
2026-02-03 07:05:05.909894 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4
2026-02-03 07:05:05.909906 | orchestrator |
2026-02-03 07:05:05.909917 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-03 07:05:05.909928 | orchestrator | Tuesday 03 February 2026 07:04:50 +0000 (0:00:01.433) 1:10:03.582 ******
2026-02-03 07:05:05.909939 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:05.909950 | orchestrator |
2026-02-03 07:05:05.909961 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-03 07:05:05.909972 | orchestrator | Tuesday 03 February 2026 07:04:51 +0000 (0:00:01.205) 1:10:04.788 ******
2026-02-03 07:05:05.909983 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:05.909994 | orchestrator |
2026-02-03 07:05:05.910004 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-03 07:05:05.910092 | orchestrator | Tuesday 03 February 2026 07:04:52 +0000 (0:00:01.236) 1:10:06.025 ******
2026-02-03 07:05:05.910106 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-03 07:05:05.910118 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-03 07:05:05.910129 | orchestrator |
2026-02-03 07:05:05.910140 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-03 07:05:05.910151 | orchestrator | Tuesday 03 February 2026 07:04:54 +0000 (0:00:01.934) 1:10:07.960 ******
2026-02-03 07:05:05.910162 | orchestrator | ok: [testbed-node-4]
2026-02-03 07:05:05.910173 | orchestrator |
2026-02-03 07:05:05.910184 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-03 07:05:05.910195 | orchestrator | Tuesday 03 February 2026 07:04:56 +0000 (0:00:01.591) 1:10:09.551 ******
2026-02-03 07:05:05.910206 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:05.910217 | orchestrator |
2026-02-03 07:05:05.910228 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-03 07:05:05.910239 | orchestrator | Tuesday 03 February 2026 07:04:57 +0000 (0:00:01.243) 1:10:10.795 ******
2026-02-03 07:05:05.910250 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:05.910260 | orchestrator |
2026-02-03 07:05:05.910271 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-03 07:05:05.910282 | orchestrator | Tuesday 03 February 2026 07:04:58 +0000 (0:00:00.825) 1:10:11.621 ******
2026-02-03 07:05:05.910293 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:05.910304 | orchestrator |
2026-02-03 07:05:05.910315 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-03 07:05:05.910325 | orchestrator | Tuesday 03 February 2026 07:04:59 +0000 (0:00:00.785) 1:10:12.406 ******
2026-02-03 07:05:05.910336 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4
2026-02-03 07:05:05.910347 | orchestrator |
2026-02-03 07:05:05.910358 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-03 07:05:05.910369 | orchestrator | Tuesday 03 February 2026 07:05:00 +0000 (0:00:01.191) 1:10:13.597 ******
2026-02-03 07:05:05.910380 | orchestrator | ok: [testbed-node-4]
2026-02-03 07:05:05.910391 | orchestrator |
2026-02-03 07:05:05.910402 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-03 07:05:05.910420 | orchestrator | Tuesday 03 February 2026 07:05:02 +0000 (0:00:01.767) 1:10:15.365 ******
2026-02-03 07:05:05.910431 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-03 07:05:05.910442 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-03 07:05:05.910453 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-03 07:05:05.910464 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:05.910475 | orchestrator |
2026-02-03 07:05:05.910485 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-03 07:05:05.910497 | orchestrator | Tuesday 03 February 2026 07:05:03 +0000 (0:00:01.199) 1:10:16.565 ******
2026-02-03 07:05:05.910507 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:05.910518 | orchestrator |
2026-02-03 07:05:05.910529 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-03 07:05:05.910541 | orchestrator | Tuesday 03 February 2026 07:05:04 +0000 (0:00:01.251) 1:10:17.816 ******
2026-02-03 07:05:05.910560 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:05.910578 | orchestrator |
2026-02-03 07:05:05.910606 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-03 07:05:51.442383 | orchestrator | Tuesday 03 February 2026 07:05:05 +0000 (0:00:01.265) 1:10:19.081 ******
2026-02-03 07:05:51.442504 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:51.442522 | orchestrator |
2026-02-03 07:05:51.442536 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-03 07:05:51.442547 | orchestrator | Tuesday 03 February 2026 07:05:07 +0000 (0:00:01.229) 1:10:20.311 ******
2026-02-03 07:05:51.442559 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:51.442570 | orchestrator |
2026-02-03 07:05:51.442581 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-03 07:05:51.442593 | orchestrator | Tuesday 03 February 2026 07:05:08 +0000 (0:00:01.198) 1:10:21.510 ******
2026-02-03 07:05:51.442604 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:51.442615 | orchestrator |
2026-02-03 07:05:51.442626 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-03 07:05:51.442654 | orchestrator | Tuesday 03 February 2026 07:05:09 +0000 (0:00:00.853) 1:10:22.363 ******
2026-02-03 07:05:51.442666 | orchestrator | ok: [testbed-node-4]
2026-02-03 07:05:51.442678 | orchestrator |
2026-02-03 07:05:51.442689 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-03 07:05:51.442724 | orchestrator | Tuesday 03 February 2026 07:05:11 +0000 (0:00:02.238) 1:10:24.601 ******
2026-02-03 07:05:51.442736 | orchestrator | ok: [testbed-node-4]
2026-02-03 07:05:51.442747 | orchestrator |
2026-02-03 07:05:51.442758 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-03 07:05:51.442769 | orchestrator | Tuesday 03 February 2026 07:05:12 +0000 (0:00:00.908) 1:10:25.510 ******
2026-02-03 07:05:51.442780 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4
2026-02-03 07:05:51.442792 | orchestrator |
2026-02-03 07:05:51.442802 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-03 07:05:51.442814 | orchestrator | Tuesday 03 February 2026 07:05:13 +0000 (0:00:01.172) 1:10:26.682 ******
2026-02-03 07:05:51.442825 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:51.442836 | orchestrator |
2026-02-03 07:05:51.442847 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-03 07:05:51.442858 | orchestrator | Tuesday 03 February 2026 07:05:14 +0000 (0:00:01.359) 1:10:28.042 ******
2026-02-03 07:05:51.442869 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:51.442880 | orchestrator |
2026-02-03 07:05:51.442890 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-03 07:05:51.442902 | orchestrator | Tuesday 03 February 2026 07:05:16 +0000 (0:00:01.303) 1:10:29.346 ******
2026-02-03 07:05:51.442913 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:51.442928 | orchestrator |
2026-02-03 07:05:51.442964 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-03 07:05:51.442978 | orchestrator | Tuesday 03 February 2026 07:05:17 +0000 (0:00:01.202) 1:10:30.549 ******
2026-02-03 07:05:51.442990 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:51.443003 | orchestrator |
2026-02-03 07:05:51.443016 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-03 07:05:51.443029 | orchestrator | Tuesday 03 February 2026 07:05:18 +0000 (0:00:01.378) 1:10:31.928 ******
2026-02-03 07:05:51.443042 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:51.443054 | orchestrator |
2026-02-03 07:05:51.443067 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-03 07:05:51.443079 | orchestrator | Tuesday 03 February 2026 07:05:20 +0000 (0:00:01.322) 1:10:33.250 ******
2026-02-03 07:05:51.443091 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:51.443104 | orchestrator |
2026-02-03 07:05:51.443117 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-03 07:05:51.443130 | orchestrator | Tuesday 03 February 2026 07:05:21 +0000 (0:00:01.274) 1:10:34.525 ******
2026-02-03 07:05:51.443143 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:51.443156 | orchestrator |
2026-02-03 07:05:51.443168 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-03 07:05:51.443180 | orchestrator | Tuesday 03 February 2026 07:05:22 +0000 (0:00:01.232) 1:10:35.757 ******
2026-02-03 07:05:51.443194 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:51.443207 | orchestrator |
2026-02-03 07:05:51.443220 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-03 07:05:51.443233 | orchestrator | Tuesday 03 February 2026 07:05:23 +0000 (0:00:01.242) 1:10:37.000 ******
2026-02-03 07:05:51.443246 | orchestrator | ok: [testbed-node-4]
2026-02-03 07:05:51.443258 | orchestrator |
2026-02-03 07:05:51.443271 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-03 07:05:51.443285 | orchestrator | Tuesday 03 February 2026 07:05:24 +0000 (0:00:00.888) 1:10:37.889 ******
2026-02-03 07:05:51.443297 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4
2026-02-03 07:05:51.443309 | orchestrator |
2026-02-03 07:05:51.443321 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-03 07:05:51.443331 | orchestrator | Tuesday 03 February 2026 07:05:26 +0000 (0:00:01.344) 1:10:39.234 ******
2026-02-03 07:05:51.443342 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph)
2026-02-03 07:05:51.443354 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/)
2026-02-03 07:05:51.443365 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-02-03 07:05:51.443376 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-02-03 07:05:51.443387 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-02-03 07:05:51.443397 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-02-03 07:05:51.443408 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-02-03 07:05:51.443419 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-02-03 07:05:51.443431 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-03 07:05:51.443442 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-03 07:05:51.443453 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-03 07:05:51.443482 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-03 07:05:51.443494 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-03 07:05:51.443505 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-03 07:05:51.443516 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph)
2026-02-03 07:05:51.443527 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph)
2026-02-03 07:05:51.443538 | orchestrator |
2026-02-03 07:05:51.443549 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-03 07:05:51.443568 | orchestrator | Tuesday 03 February 2026 07:05:32 +0000 (0:00:06.546) 1:10:45.780 ******
2026-02-03 07:05:51.443579 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4
2026-02-03 07:05:51.443591 | orchestrator |
2026-02-03 07:05:51.443602 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-03 07:05:51.443618 | orchestrator | Tuesday 03 February 2026 07:05:33 +0000 (0:00:01.196) 1:10:46.977 ******
2026-02-03 07:05:51.443630 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-03 07:05:51.443642 | orchestrator |
2026-02-03 07:05:51.443653 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-03 07:05:51.443664 | orchestrator | Tuesday 03 February 2026 07:05:35 +0000 (0:00:01.645) 1:10:48.623 ******
2026-02-03 07:05:51.443676 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-03 07:05:51.443687 | orchestrator |
2026-02-03 07:05:51.443698 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-03 07:05:51.443724 | orchestrator | Tuesday 03 February 2026 07:05:37 +0000 (0:00:01.701) 1:10:50.325 ******
2026-02-03 07:05:51.443735 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:51.443746 | orchestrator |
2026-02-03 07:05:51.443757 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-03 07:05:51.443768 | orchestrator | Tuesday 03 February 2026 07:05:38 +0000 (0:00:00.898) 1:10:51.224 ******
2026-02-03 07:05:51.443779 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:51.443790 | orchestrator |
2026-02-03 07:05:51.443801 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-03 07:05:51.443812 | orchestrator | Tuesday 03 February 2026 07:05:38 +0000 (0:00:00.774) 1:10:51.998 ******
2026-02-03 07:05:51.443823 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:51.443834 | orchestrator |
2026-02-03 07:05:51.443845 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-03 07:05:51.443856 | orchestrator | Tuesday 03 February 2026 07:05:39 +0000 (0:00:00.834) 1:10:52.832 ******
2026-02-03 07:05:51.443867 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:51.443878 | orchestrator |
2026-02-03 07:05:51.443889 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-03 07:05:51.443900 | orchestrator | Tuesday 03 February 2026 07:05:40 +0000 (0:00:00.850) 1:10:53.682 ******
2026-02-03 07:05:51.443911 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:51.443922 | orchestrator |
2026-02-03 07:05:51.443933 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-03 07:05:51.443944 | orchestrator | Tuesday 03 February 2026 07:05:41 +0000 (0:00:00.779) 1:10:54.462 ******
2026-02-03 07:05:51.443955 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:51.443966 | orchestrator |
2026-02-03 07:05:51.443977 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-03 07:05:51.443988 | orchestrator | Tuesday 03 February 2026 07:05:42 +0000 (0:00:00.797) 1:10:55.260 ******
2026-02-03 07:05:51.443999 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:51.444010 | orchestrator |
2026-02-03 07:05:51.444021 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-03 07:05:51.444032 | orchestrator | Tuesday 03 February 2026 07:05:42 +0000 (0:00:00.846) 1:10:56.106 ******
2026-02-03 07:05:51.444043 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:51.444054 | orchestrator |
2026-02-03 07:05:51.444065 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-03 07:05:51.444076 | orchestrator | Tuesday 03 February 2026 07:05:43 +0000 (0:00:00.859) 1:10:56.966 ******
2026-02-03 07:05:51.444087 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:51.444098 | orchestrator |
2026-02-03 07:05:51.444109 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-03 07:05:51.444126 | orchestrator | Tuesday 03 February 2026 07:05:44 +0000 (0:00:00.880) 1:10:57.846 ******
2026-02-03 07:05:51.444137 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:51.444148 | orchestrator |
2026-02-03 07:05:51.444159 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-03 07:05:51.444171 | orchestrator | Tuesday 03 February 2026 07:05:45 +0000 (0:00:00.832) 1:10:58.679 ******
2026-02-03 07:05:51.444182 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:05:51.444193 | orchestrator |
2026-02-03 07:05:51.444204 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-03 07:05:51.444215 | orchestrator | Tuesday 03 February 2026 07:05:46 +0000 (0:00:00.830) 1:10:59.510 ******
2026-02-03 07:05:51.444225 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)]
2026-02-03 07:05:51.444236 | orchestrator |
2026-02-03 07:05:51.444247 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-03 07:05:51.444258 | orchestrator | Tuesday 03 February 2026 07:05:50 +0000 (0:00:04.242) 1:11:03.752 ******
2026-02-03 07:05:51.444269 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-03 07:05:51.444281 | orchestrator |
2026-02-03 07:05:51.444299 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-03 07:06:34.577130 | orchestrator | Tuesday 03 February 2026 07:05:51 +0000 (0:00:00.865) 1:11:04.618 ******
2026-02-03 07:06:34.577252 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-02-03 07:06:34.577288 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-02-03 07:06:34.577302 | orchestrator |
2026-02-03 07:06:34.577314 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-03 07:06:34.577325 | orchestrator | Tuesday 03 February 2026 07:05:56 +0000 (0:00:05.178) 1:11:09.796 ******
2026-02-03 07:06:34.577336 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:06:34.577348 | orchestrator |
2026-02-03 07:06:34.577359 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-03 07:06:34.577370 | orchestrator | Tuesday 03 February 2026 07:05:57 +0000 (0:00:00.817) 1:11:10.614 ******
2026-02-03 07:06:34.577381 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:06:34.577392 | orchestrator |
2026-02-03 07:06:34.577403 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-03 07:06:34.577415 | orchestrator | Tuesday 03 February 2026 07:05:58 +0000 (0:00:00.810) 1:11:11.425 ******
2026-02-03 07:06:34.577426 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:06:34.577437 | orchestrator |
2026-02-03 07:06:34.577447 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-03 07:06:34.577458 | orchestrator | Tuesday 03 February 2026 07:05:59 +0000 (0:00:00.824) 1:11:12.250 ******
2026-02-03 07:06:34.577469 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:06:34.577480 | orchestrator |
2026-02-03 07:06:34.577491 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-03 07:06:34.577501 | orchestrator | Tuesday 03 February 2026 07:05:59 +0000 (0:00:00.922) 1:11:13.172 ******
2026-02-03 07:06:34.577512 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:06:34.577523 | orchestrator |
2026-02-03 07:06:34.577534 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-03 07:06:34.577570 | orchestrator | Tuesday 03 February 2026 07:06:00 +0000 (0:00:00.863) 1:11:14.036 ******
2026-02-03 07:06:34.577581 | orchestrator | ok: [testbed-node-4]
2026-02-03 07:06:34.577593 | orchestrator |
2026-02-03 07:06:34.577604 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-03 07:06:34.577614 | orchestrator | Tuesday 03 February 2026 07:06:01 +0000 (0:00:00.933) 1:11:14.969 ******
2026-02-03 07:06:34.577625 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-03 07:06:34.577636 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-03 07:06:34.577647 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-03 07:06:34.577658 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:06:34.577671 | orchestrator |
2026-02-03 07:06:34.577685 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-03 07:06:34.577733 | orchestrator | Tuesday 03 February 2026 07:06:02 +0000 (0:00:01.172) 1:11:16.142 ******
2026-02-03 07:06:34.577755 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-03 07:06:34.577776 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-03 07:06:34.577797 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-03 07:06:34.577810 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:06:34.577822 | orchestrator |
2026-02-03 07:06:34.577836 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-03 07:06:34.577849 | orchestrator | Tuesday 03 February 2026 07:06:04 +0000 (0:00:01.168) 1:11:17.311 ******
2026-02-03 07:06:34.577862 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-03 07:06:34.577875 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-03 07:06:34.577887 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-03 07:06:34.577900 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:06:34.577913 | orchestrator |
2026-02-03 07:06:34.577926 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-03 07:06:34.577939 | orchestrator | Tuesday 03 February 2026 07:06:05 +0000 (0:00:01.243) 1:11:18.554 ******
2026-02-03 07:06:34.577951 | orchestrator | ok: [testbed-node-4]
2026-02-03 07:06:34.577961 | orchestrator |
2026-02-03 07:06:34.577972 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-03 07:06:34.577983 | orchestrator | Tuesday 03 February 2026 07:06:06 +0000 (0:00:00.850) 1:11:19.405 ******
2026-02-03 07:06:34.577993 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-03 07:06:34.578004 | orchestrator |
2026-02-03 07:06:34.578015 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-03 07:06:34.578089 | orchestrator | Tuesday 03 February 2026 07:06:07 +0000 (0:00:01.049) 1:11:20.455 ******
2026-02-03 07:06:34.578101 | orchestrator | ok: [testbed-node-4]
2026-02-03 07:06:34.578111 | orchestrator |
2026-02-03 07:06:34.578122 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-02-03 07:06:34.578133 | orchestrator | Tuesday 03 February 2026 07:06:08 +0000 (0:00:01.649) 1:11:22.104 ******
2026-02-03 07:06:34.578144 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-4
2026-02-03 07:06:34.578155 | orchestrator |
2026-02-03 07:06:34.578195 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-03 07:06:34.578207 | orchestrator | Tuesday 03 February 2026 07:06:10 +0000 (0:00:01.165) 1:11:23.270 ******
2026-02-03 07:06:34.578218 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-03 07:06:34.578229 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-03 07:06:34.578240 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-03 07:06:34.578251 | orchestrator |
2026-02-03 07:06:34.578262 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-03 07:06:34.578273 | orchestrator | Tuesday 03 February 2026 07:06:13 +0000 (0:00:03.288) 1:11:26.558 ******
2026-02-03 07:06:34.578285 | orchestrator | ok: [testbed-node-4] => (item=None)
2026-02-03 07:06:34.578306 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-03 07:06:34.578317 | orchestrator | ok: [testbed-node-4]
2026-02-03 07:06:34.578328 | orchestrator |
2026-02-03 07:06:34.578346 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-02-03 07:06:34.578357 | orchestrator | Tuesday 03 February 2026 07:06:15 +0000 (0:00:02.115) 1:11:28.674 ******
2026-02-03 07:06:34.578368 | orchestrator | skipping: [testbed-node-4]
2026-02-03 07:06:34.578379 | orchestrator |
2026-02-03 07:06:34.578390 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-02-03 07:06:34.578401 | orchestrator | Tuesday 03 February 2026 07:06:16 +0000 (0:00:00.800) 1:11:29.474 ******
2026-02-03 07:06:34.578412 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-4
2026-02-03 07:06:34.578423 | orchestrator |
2026-02-03 07:06:34.578434 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-02-03 07:06:34.578445 | orchestrator | Tuesday 03 February 2026 07:06:17 +0000 (0:00:01.166) 1:11:30.641 ******
2026-02-03 07:06:34.578456 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-03 07:06:34.578469 | orchestrator |
2026-02-03 07:06:34.578480 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-02-03 07:06:34.578491 | orchestrator | Tuesday 03 February 2026 07:06:19 +0000 (0:00:01.732) 1:11:32.374 ******
2026-02-03 07:06:34.578502 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-03 07:06:34.578513 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-02-03 07:06:34.578524 | orchestrator |
2026-02-03 07:06:34.578535 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-03 07:06:34.578546 | orchestrator | Tuesday 03 February 2026 07:06:24 +0000 (0:00:05.264) 1:11:37.638 ******
2026-02-03 07:06:34.578556 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 07:06:34.578567 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-03 07:06:34.578578 | orchestrator | 2026-02-03 07:06:34.578589 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-03 07:06:34.578600 | orchestrator | Tuesday 03 February 2026 07:06:27 +0000 (0:00:03.359) 1:11:40.998 ****** 2026-02-03 07:06:34.578611 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-02-03 07:06:34.578622 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:06:34.578633 | orchestrator | 2026-02-03 07:06:34.578644 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-03 07:06:34.578655 | orchestrator | Tuesday 03 February 2026 07:06:29 +0000 (0:00:01.764) 1:11:42.762 ****** 2026-02-03 07:06:34.578666 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-4 2026-02-03 07:06:34.578677 | orchestrator | 2026-02-03 07:06:34.578688 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-03 07:06:34.578724 | orchestrator | Tuesday 03 February 2026 07:06:31 +0000 (0:00:01.529) 1:11:44.292 ****** 2026-02-03 07:06:34.578745 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 07:06:34.578766 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 07:06:34.578786 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 07:06:34.578805 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-02-03 07:06:34.578817 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 07:06:34.578836 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:06:34.578847 | orchestrator | 2026-02-03 07:06:34.578857 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-03 07:06:34.578868 | orchestrator | Tuesday 03 February 2026 07:06:32 +0000 (0:00:01.720) 1:11:46.013 ****** 2026-02-03 07:06:34.578879 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 07:06:34.578890 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 07:06:34.578901 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 07:06:34.578920 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 07:07:43.624457 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 07:07:43.624567 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:07:43.624582 | orchestrator | 2026-02-03 07:07:43.624592 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-03 07:07:43.624602 | orchestrator | Tuesday 03 February 2026 07:06:34 +0000 (0:00:01.724) 1:11:47.738 ****** 2026-02-03 07:07:43.624610 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-03 07:07:43.624631 
| orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-03 07:07:43.624639 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-03 07:07:43.624648 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-03 07:07:43.624657 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-03 07:07:43.624665 | orchestrator | 2026-02-03 07:07:43.624674 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-03 07:07:43.624682 | orchestrator | Tuesday 03 February 2026 07:07:06 +0000 (0:00:32.176) 1:12:19.915 ****** 2026-02-03 07:07:43.624721 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:07:43.624733 | orchestrator | 2026-02-03 07:07:43.624741 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-03 07:07:43.624749 | orchestrator | Tuesday 03 February 2026 07:07:07 +0000 (0:00:00.813) 1:12:20.729 ****** 2026-02-03 07:07:43.624757 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:07:43.624765 | orchestrator | 2026-02-03 07:07:43.624773 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-03 07:07:43.624781 | orchestrator | Tuesday 03 February 2026 07:07:08 +0000 (0:00:00.813) 1:12:21.542 ****** 2026-02-03 07:07:43.624789 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-4 2026-02-03 07:07:43.624798 | orchestrator | 2026-02-03 07:07:43.624806 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-02-03 07:07:43.624814 | orchestrator | Tuesday 03 February 2026 07:07:09 +0000 (0:00:01.124) 1:12:22.667 ****** 2026-02-03 07:07:43.624821 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-4 2026-02-03 07:07:43.624829 | orchestrator | 2026-02-03 07:07:43.624837 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-03 07:07:43.624845 | orchestrator | Tuesday 03 February 2026 07:07:10 +0000 (0:00:01.193) 1:12:23.860 ****** 2026-02-03 07:07:43.624871 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:07:43.624881 | orchestrator | 2026-02-03 07:07:43.624890 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-03 07:07:43.624898 | orchestrator | Tuesday 03 February 2026 07:07:12 +0000 (0:00:02.154) 1:12:26.015 ****** 2026-02-03 07:07:43.624905 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:07:43.624916 | orchestrator | 2026-02-03 07:07:43.624929 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-03 07:07:43.624944 | orchestrator | Tuesday 03 February 2026 07:07:14 +0000 (0:00:02.064) 1:12:28.079 ****** 2026-02-03 07:07:43.624958 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:07:43.624971 | orchestrator | 2026-02-03 07:07:43.624985 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-03 07:07:43.624998 | orchestrator | Tuesday 03 February 2026 07:07:17 +0000 (0:00:02.433) 1:12:30.512 ****** 2026-02-03 07:07:43.625011 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-03 07:07:43.625025 | orchestrator | 2026-02-03 07:07:43.625039 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-02-03 07:07:43.625052 | 
orchestrator | 2026-02-03 07:07:43.625065 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-03 07:07:43.625079 | orchestrator | Tuesday 03 February 2026 07:07:20 +0000 (0:00:03.413) 1:12:33.926 ****** 2026-02-03 07:07:43.625093 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-02-03 07:07:43.625107 | orchestrator | 2026-02-03 07:07:43.625122 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-03 07:07:43.625135 | orchestrator | Tuesday 03 February 2026 07:07:21 +0000 (0:00:01.208) 1:12:35.135 ****** 2026-02-03 07:07:43.625150 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:07:43.625163 | orchestrator | 2026-02-03 07:07:43.625173 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-03 07:07:43.625185 | orchestrator | Tuesday 03 February 2026 07:07:23 +0000 (0:00:01.479) 1:12:36.615 ****** 2026-02-03 07:07:43.625198 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:07:43.625213 | orchestrator | 2026-02-03 07:07:43.625227 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-03 07:07:43.625240 | orchestrator | Tuesday 03 February 2026 07:07:24 +0000 (0:00:01.179) 1:12:37.794 ****** 2026-02-03 07:07:43.625254 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:07:43.625267 | orchestrator | 2026-02-03 07:07:43.625282 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-03 07:07:43.625295 | orchestrator | Tuesday 03 February 2026 07:07:26 +0000 (0:00:01.548) 1:12:39.342 ****** 2026-02-03 07:07:43.625309 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:07:43.625322 | orchestrator | 2026-02-03 07:07:43.625355 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-03 07:07:43.625369 | orchestrator | Tuesday 03 
February 2026 07:07:27 +0000 (0:00:01.138) 1:12:40.481 ****** 2026-02-03 07:07:43.625383 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:07:43.625424 | orchestrator | 2026-02-03 07:07:43.625439 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-03 07:07:43.625452 | orchestrator | Tuesday 03 February 2026 07:07:28 +0000 (0:00:01.170) 1:12:41.651 ****** 2026-02-03 07:07:43.625467 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:07:43.625480 | orchestrator | 2026-02-03 07:07:43.625494 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-03 07:07:43.625510 | orchestrator | Tuesday 03 February 2026 07:07:29 +0000 (0:00:01.199) 1:12:42.850 ****** 2026-02-03 07:07:43.625523 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:07:43.625537 | orchestrator | 2026-02-03 07:07:43.625559 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-03 07:07:43.625567 | orchestrator | Tuesday 03 February 2026 07:07:30 +0000 (0:00:01.281) 1:12:44.132 ****** 2026-02-03 07:07:43.625575 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:07:43.625591 | orchestrator | 2026-02-03 07:07:43.625599 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-03 07:07:43.625607 | orchestrator | Tuesday 03 February 2026 07:07:32 +0000 (0:00:01.376) 1:12:45.509 ****** 2026-02-03 07:07:43.625615 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 07:07:43.625623 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 07:07:43.625631 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 07:07:43.625639 | orchestrator | 2026-02-03 07:07:43.625647 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-02-03 07:07:43.625655 | orchestrator | Tuesday 03 February 2026 07:07:34 +0000 (0:00:02.338) 1:12:47.848 ****** 2026-02-03 07:07:43.625662 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:07:43.625670 | orchestrator | 2026-02-03 07:07:43.625678 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-03 07:07:43.625686 | orchestrator | Tuesday 03 February 2026 07:07:36 +0000 (0:00:01.370) 1:12:49.219 ****** 2026-02-03 07:07:43.625722 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 07:07:43.625736 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 07:07:43.625751 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 07:07:43.625765 | orchestrator | 2026-02-03 07:07:43.625780 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-03 07:07:43.625794 | orchestrator | Tuesday 03 February 2026 07:07:39 +0000 (0:00:03.041) 1:12:52.261 ****** 2026-02-03 07:07:43.625807 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-03 07:07:43.625823 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-03 07:07:43.625838 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-03 07:07:43.625854 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:07:43.625870 | orchestrator | 2026-02-03 07:07:43.625885 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-03 07:07:43.625899 | orchestrator | Tuesday 03 February 2026 07:07:40 +0000 (0:00:01.489) 1:12:53.750 ****** 2026-02-03 07:07:43.625915 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-03 07:07:43.626061 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-03 07:07:43.626087 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-03 07:07:43.626099 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:07:43.626107 | orchestrator | 2026-02-03 07:07:43.626115 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-03 07:07:43.626124 | orchestrator | Tuesday 03 February 2026 07:07:42 +0000 (0:00:01.764) 1:12:55.515 ****** 2026-02-03 07:07:43.626140 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 07:07:43.626169 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 07:08:04.220778 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-03 07:08:04.220888 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:08:04.220905 | orchestrator | 2026-02-03 07:08:04.220936 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-03 07:08:04.220950 | orchestrator | Tuesday 03 February 2026 07:07:43 +0000 (0:00:01.283) 1:12:56.799 ****** 2026-02-03 07:08:04.220963 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'fc9af7e241e8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-03 07:07:36.629459', 'end': '2026-02-03 07:07:36.684053', 'delta': '0:00:00.054594', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fc9af7e241e8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-03 07:08:04.220978 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'a8f198eef309', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-03 07:07:37.241154', 'end': '2026-02-03 07:07:37.291612', 'delta': '0:00:00.050458', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a8f198eef309'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-03 07:08:04.220990 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '79d18794d8bb', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-03 07:07:37.839912', 'end': '2026-02-03 07:07:37.883834', 'delta': '0:00:00.043922', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['79d18794d8bb'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-03 07:08:04.221001 | orchestrator | 2026-02-03 07:08:04.221013 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-03 07:08:04.221024 | orchestrator | Tuesday 03 February 2026 07:07:44 +0000 (0:00:01.276) 1:12:58.075 ****** 2026-02-03 07:08:04.221035 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:08:04.221047 | orchestrator | 2026-02-03 07:08:04.221058 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-03 07:08:04.221069 | orchestrator | Tuesday 03 February 2026 07:07:46 +0000 (0:00:01.398) 1:12:59.473 ****** 2026-02-03 07:08:04.221079 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:08:04.221090 | orchestrator | 2026-02-03 07:08:04.221124 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-02-03 07:08:04.221136 | orchestrator | Tuesday 03 February 2026 07:07:47 +0000 (0:00:01.419) 1:13:00.892 ****** 2026-02-03 07:08:04.221147 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:08:04.221158 | orchestrator | 2026-02-03 07:08:04.221169 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-03 07:08:04.221180 | orchestrator | Tuesday 03 February 2026 07:07:48 +0000 (0:00:01.172) 1:13:02.065 ****** 2026-02-03 07:08:04.221191 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-03 07:08:04.221202 | orchestrator | 2026-02-03 07:08:04.221213 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-03 07:08:04.221224 | orchestrator | Tuesday 03 February 2026 07:07:51 +0000 (0:00:02.143) 1:13:04.209 ****** 2026-02-03 07:08:04.221234 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:08:04.221245 | orchestrator | 2026-02-03 07:08:04.221256 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-03 07:08:04.221267 | orchestrator | Tuesday 03 February 2026 07:07:52 +0000 (0:00:01.235) 1:13:05.445 ****** 2026-02-03 07:08:04.221296 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:08:04.221308 | orchestrator | 2026-02-03 07:08:04.221319 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-03 07:08:04.221330 | orchestrator | Tuesday 03 February 2026 07:07:53 +0000 (0:00:01.196) 1:13:06.642 ****** 2026-02-03 07:08:04.221340 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:08:04.221351 | orchestrator | 2026-02-03 07:08:04.221362 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-03 07:08:04.221373 | orchestrator | Tuesday 03 February 2026 07:07:55 +0000 (0:00:01.939) 1:13:08.581 ****** 2026-02-03 07:08:04.221384 | orchestrator | 
skipping: [testbed-node-5] 2026-02-03 07:08:04.221395 | orchestrator | 2026-02-03 07:08:04.221406 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-03 07:08:04.221422 | orchestrator | Tuesday 03 February 2026 07:07:56 +0000 (0:00:01.188) 1:13:09.770 ****** 2026-02-03 07:08:04.221433 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:08:04.221444 | orchestrator | 2026-02-03 07:08:04.221455 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-03 07:08:04.221466 | orchestrator | Tuesday 03 February 2026 07:07:57 +0000 (0:00:01.181) 1:13:10.951 ****** 2026-02-03 07:08:04.221477 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:08:04.221488 | orchestrator | 2026-02-03 07:08:04.221498 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-03 07:08:04.221509 | orchestrator | Tuesday 03 February 2026 07:07:58 +0000 (0:00:01.200) 1:13:12.152 ****** 2026-02-03 07:08:04.221520 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:08:04.221531 | orchestrator | 2026-02-03 07:08:04.221542 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-03 07:08:04.221553 | orchestrator | Tuesday 03 February 2026 07:08:00 +0000 (0:00:01.167) 1:13:13.319 ****** 2026-02-03 07:08:04.221564 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:08:04.221575 | orchestrator | 2026-02-03 07:08:04.221586 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-03 07:08:04.221596 | orchestrator | Tuesday 03 February 2026 07:08:01 +0000 (0:00:01.307) 1:13:14.627 ****** 2026-02-03 07:08:04.221607 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:08:04.221618 | orchestrator | 2026-02-03 07:08:04.221629 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-03 07:08:04.221641 
| orchestrator | Tuesday 03 February 2026 07:08:02 +0000 (0:00:01.217) 1:13:15.844 ****** 2026-02-03 07:08:04.221652 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:08:04.221663 | orchestrator | 2026-02-03 07:08:04.221673 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-03 07:08:04.221712 | orchestrator | Tuesday 03 February 2026 07:08:03 +0000 (0:00:01.258) 1:13:17.103 ****** 2026-02-03 07:08:04.221727 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 07:08:04.221748 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--77c51d77--cdc1--5563--af81--33d9bc4e9bd8-osd--block--77c51d77--cdc1--5563--af81--33d9bc4e9bd8', 'dm-uuid-LVM-Wbq8zZZmzC2gBNhxYxtVTvfLotN9I39ewfUHEKJIYaxWx1lem6PI2cmyC5FHw26a'], 'uuids': ['de4b76bf-9af2-40ae-a6b3-4edbecd71396'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0bcbc917', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['wfUHEK-JIYa-xWx1-lem6-PI2c-myC5-FHw26a']}})  2026-02-03 07:08:04.221761 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ed5f26b-b68f-43a9-951f-f4acae255308', 'scsi-SQEMU_QEMU_HARDDISK_1ed5f26b-b68f-43a9-951f-f4acae255308'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1ed5f26b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-03 07:08:04.221854 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-fs9ehM-rHKw-gnft-ZAPg-F21u-3MhY-bxvv54', 'scsi-0QEMU_QEMU_HARDDISK_a2e14d93-a486-403c-9c37-4f6de49ddee5', 'scsi-SQEMU_QEMU_HARDDISK_a2e14d93-a486-403c-9c37-4f6de49ddee5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2e14d93', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--9cbb71d1--90c1--5063--b304--f845b9e79bfb-osd--block--9cbb71d1--90c1--5063--b304--f845b9e79bfb']}})  2026-02-03 07:08:05.466551 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 07:08:05.466652 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 07:08:05.466668 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-03 07:08:05.466754 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 07:08:05.466768 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-pTdMvd-sDAF-IMSx-8jpl-0O46-FJH5-Fa8Xca', 'dm-uuid-CRYPT-LUKS2-828a04c154134531b57bb1d5e612c63b-pTdMvd-sDAF-IMSx-8jpl-0O46-FJH5-Fa8Xca'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-03 07:08:05.466780 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 07:08:05.466793 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9cbb71d1--90c1--5063--b304--f845b9e79bfb-osd--block--9cbb71d1--90c1--5063--b304--f845b9e79bfb', 'dm-uuid-LVM-mOPc0Zn7dvz2LW84SWB0gFMNdSnKuErspTdMvdsDAFIMSx8jpl0O46FJH5Fa8Xca'], 'uuids': ['828a04c1-5413-4531-b57b-b1d5e612c63b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2e14d93', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['pTdMvd-sDAF-IMSx-8jpl-0O46-FJH5-Fa8Xca']}})  2026-02-03 07:08:05.466833 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-XC0deN-vGzU-6Pu8-7l0p-bm5X-RdCc-NCjXuW', 'scsi-0QEMU_QEMU_HARDDISK_0bcbc917-1e4e-4947-8603-c7f49bd04ea8', 'scsi-SQEMU_QEMU_HARDDISK_0bcbc917-1e4e-4947-8603-c7f49bd04ea8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0bcbc917', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--77c51d77--cdc1--5563--af81--33d9bc4e9bd8-osd--block--77c51d77--cdc1--5563--af81--33d9bc4e9bd8']}})  2026-02-03 07:08:05.466847 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 07:08:05.466862 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1e34e583', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part16', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part14', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part15', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part1', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-03 07:08:05.466885 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 07:08:05.466897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-03 07:08:05.466917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-wfUHEK-JIYa-xWx1-lem6-PI2c-myC5-FHw26a', 'dm-uuid-CRYPT-LUKS2-de4b76bf9af240aea6b34edbecd71396-wfUHEK-JIYa-xWx1-lem6-PI2c-myC5-FHw26a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-03 07:08:05.717030 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:08:05.717118 | orchestrator | 2026-02-03 07:08:05.717150 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-03 07:08:05.717164 | orchestrator | Tuesday 03 February 2026 07:08:05 +0000 (0:00:01.542) 1:13:18.645 ****** 2026-02-03 07:08:05.717179 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 07:08:05.717216 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--77c51d77--cdc1--5563--af81--33d9bc4e9bd8-osd--block--77c51d77--cdc1--5563--af81--33d9bc4e9bd8', 'dm-uuid-LVM-Wbq8zZZmzC2gBNhxYxtVTvfLotN9I39ewfUHEKJIYaxWx1lem6PI2cmyC5FHw26a'], 'uuids': ['de4b76bf-9af2-40ae-a6b3-4edbecd71396'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0bcbc917', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['wfUHEK-JIYa-xWx1-lem6-PI2c-myC5-FHw26a']}}, 'ansible_loop_var': 'item'})  2026-02-03 07:08:05.717231 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ed5f26b-b68f-43a9-951f-f4acae255308', 'scsi-SQEMU_QEMU_HARDDISK_1ed5f26b-b68f-43a9-951f-f4acae255308'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1ed5f26b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 07:08:05.717245 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-fs9ehM-rHKw-gnft-ZAPg-F21u-3MhY-bxvv54', 'scsi-0QEMU_QEMU_HARDDISK_a2e14d93-a486-403c-9c37-4f6de49ddee5', 'scsi-SQEMU_QEMU_HARDDISK_a2e14d93-a486-403c-9c37-4f6de49ddee5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2e14d93', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--9cbb71d1--90c1--5063--b304--f845b9e79bfb-osd--block--9cbb71d1--90c1--5063--b304--f845b9e79bfb']}}, 'ansible_loop_var': 'item'})  2026-02-03 07:08:05.717278 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 07:08:05.717296 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 07:08:05.717317 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-03-02-24-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 07:08:05.717330 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 07:08:05.717342 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-pTdMvd-sDAF-IMSx-8jpl-0O46-FJH5-Fa8Xca', 'dm-uuid-CRYPT-LUKS2-828a04c154134531b57bb1d5e612c63b-pTdMvd-sDAF-IMSx-8jpl-0O46-FJH5-Fa8Xca'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 07:08:05.717380 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 07:08:05.717406 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9cbb71d1--90c1--5063--b304--f845b9e79bfb-osd--block--9cbb71d1--90c1--5063--b304--f845b9e79bfb', 'dm-uuid-LVM-mOPc0Zn7dvz2LW84SWB0gFMNdSnKuErspTdMvdsDAFIMSx8jpl0O46FJH5Fa8Xca'], 'uuids': ['828a04c1-5413-4531-b57b-b1d5e612c63b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2e14d93', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['pTdMvd-sDAF-IMSx-8jpl-0O46-FJH5-Fa8Xca']}}, 'ansible_loop_var': 'item'})  2026-02-03 07:08:19.476545 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-XC0deN-vGzU-6Pu8-7l0p-bm5X-RdCc-NCjXuW', 'scsi-0QEMU_QEMU_HARDDISK_0bcbc917-1e4e-4947-8603-c7f49bd04ea8', 'scsi-SQEMU_QEMU_HARDDISK_0bcbc917-1e4e-4947-8603-c7f49bd04ea8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0bcbc917', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--77c51d77--cdc1--5563--af81--33d9bc4e9bd8-osd--block--77c51d77--cdc1--5563--af81--33d9bc4e9bd8']}}, 'ansible_loop_var': 'item'})  2026-02-03 07:08:19.476839 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 07:08:19.476864 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1e34e583', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part16', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part14', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part15', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part1', 'scsi-SQEMU_QEMU_HARDDISK_1e34e583-c935-4574-8990-e89cac137457-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 07:08:19.476913 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 07:08:19.476936 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 07:08:19.476948 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-wfUHEK-JIYa-xWx1-lem6-PI2c-myC5-FHw26a', 'dm-uuid-CRYPT-LUKS2-de4b76bf9af240aea6b34edbecd71396-wfUHEK-JIYa-xWx1-lem6-PI2c-myC5-FHw26a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-03 07:08:19.476962 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:08:19.476974 | orchestrator | 2026-02-03 07:08:19.476986 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-03 07:08:19.476999 | orchestrator | Tuesday 03 February 2026 07:08:06 +0000 (0:00:01.535) 1:13:20.181 ****** 2026-02-03 07:08:19.477010 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:08:19.477023 | orchestrator | 2026-02-03 07:08:19.477034 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-03 07:08:19.477045 | orchestrator | Tuesday 03 February 2026 07:08:08 +0000 (0:00:01.530) 1:13:21.712 ****** 2026-02-03 07:08:19.477056 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:08:19.477067 | orchestrator | 2026-02-03 07:08:19.477077 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-03 07:08:19.477088 | orchestrator | Tuesday 03 February 2026 07:08:09 +0000 (0:00:01.311) 1:13:23.023 ****** 2026-02-03 07:08:19.477099 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:08:19.477110 | orchestrator | 2026-02-03 07:08:19.477121 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-03 07:08:19.477132 | orchestrator | Tuesday 03 February 2026 07:08:11 +0000 (0:00:01.551) 1:13:24.574 ****** 2026-02-03 07:08:19.477142 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:08:19.477153 | orchestrator | 2026-02-03 07:08:19.477164 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-03 07:08:19.477175 | orchestrator | Tuesday 03 February 2026 07:08:12 +0000 (0:00:01.294) 1:13:25.869 ****** 2026-02-03 07:08:19.477185 | orchestrator | skipping: [testbed-node-5] 2026-02-03 
07:08:19.477196 | orchestrator | 2026-02-03 07:08:19.477207 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-03 07:08:19.477218 | orchestrator | Tuesday 03 February 2026 07:08:14 +0000 (0:00:01.367) 1:13:27.237 ****** 2026-02-03 07:08:19.477229 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:08:19.477240 | orchestrator | 2026-02-03 07:08:19.477251 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-03 07:08:19.477262 | orchestrator | Tuesday 03 February 2026 07:08:15 +0000 (0:00:01.259) 1:13:28.497 ****** 2026-02-03 07:08:19.477272 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-03 07:08:19.477284 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-03 07:08:19.477302 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-03 07:08:19.477313 | orchestrator | 2026-02-03 07:08:19.477324 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-03 07:08:19.477335 | orchestrator | Tuesday 03 February 2026 07:08:17 +0000 (0:00:01.813) 1:13:30.310 ****** 2026-02-03 07:08:19.477346 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-03 07:08:19.477366 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-03 07:08:19.477379 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-03 07:08:19.477389 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:08:19.477400 | orchestrator | 2026-02-03 07:08:19.477411 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-03 07:08:19.477422 | orchestrator | Tuesday 03 February 2026 07:08:18 +0000 (0:00:01.183) 1:13:31.493 ****** 2026-02-03 07:08:19.477438 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-02-03 07:08:19.477450 | 
orchestrator | 2026-02-03 07:08:19.477468 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-03 07:09:05.324806 | orchestrator | Tuesday 03 February 2026 07:08:19 +0000 (0:00:01.157) 1:13:32.651 ****** 2026-02-03 07:09:05.324979 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:09:05.325000 | orchestrator | 2026-02-03 07:09:05.325013 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-03 07:09:05.325025 | orchestrator | Tuesday 03 February 2026 07:08:20 +0000 (0:00:01.241) 1:13:33.893 ****** 2026-02-03 07:09:05.325036 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:09:05.325047 | orchestrator | 2026-02-03 07:09:05.325059 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-03 07:09:05.325070 | orchestrator | Tuesday 03 February 2026 07:08:21 +0000 (0:00:01.214) 1:13:35.108 ****** 2026-02-03 07:09:05.325081 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:09:05.325092 | orchestrator | 2026-02-03 07:09:05.325103 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-03 07:09:05.325114 | orchestrator | Tuesday 03 February 2026 07:08:23 +0000 (0:00:01.234) 1:13:36.342 ****** 2026-02-03 07:09:05.325126 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:09:05.325138 | orchestrator | 2026-02-03 07:09:05.325160 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-03 07:09:05.325171 | orchestrator | Tuesday 03 February 2026 07:08:24 +0000 (0:00:01.265) 1:13:37.608 ****** 2026-02-03 07:09:05.325182 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-03 07:09:05.325195 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-03 07:09:05.325207 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-02-03 07:09:05.325218 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:09:05.325229 | orchestrator | 2026-02-03 07:09:05.325240 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-03 07:09:05.325251 | orchestrator | Tuesday 03 February 2026 07:08:26 +0000 (0:00:02.016) 1:13:39.624 ****** 2026-02-03 07:09:05.325262 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-03 07:09:05.325273 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-03 07:09:05.325284 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-03 07:09:05.325297 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:09:05.325332 | orchestrator | 2026-02-03 07:09:05.325346 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-03 07:09:05.325360 | orchestrator | Tuesday 03 February 2026 07:08:28 +0000 (0:00:01.937) 1:13:41.562 ****** 2026-02-03 07:09:05.325373 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-03 07:09:05.325387 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-03 07:09:05.325401 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-03 07:09:05.325438 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:09:05.325452 | orchestrator | 2026-02-03 07:09:05.325478 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-03 07:09:05.325491 | orchestrator | Tuesday 03 February 2026 07:08:30 +0000 (0:00:02.048) 1:13:43.611 ****** 2026-02-03 07:09:05.325504 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:09:05.325517 | orchestrator | 2026-02-03 07:09:05.325530 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-03 07:09:05.325542 | orchestrator | Tuesday 03 February 2026 07:08:31 +0000 
(0:00:01.344) 1:13:44.955 ****** 2026-02-03 07:09:05.325555 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-03 07:09:05.325567 | orchestrator | 2026-02-03 07:09:05.325580 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-03 07:09:05.325593 | orchestrator | Tuesday 03 February 2026 07:08:33 +0000 (0:00:01.399) 1:13:46.354 ****** 2026-02-03 07:09:05.325606 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 07:09:05.325619 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 07:09:05.325632 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 07:09:05.325646 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-03 07:09:05.325657 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-03 07:09:05.325669 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-02-03 07:09:05.325705 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-03 07:09:05.325716 | orchestrator | 2026-02-03 07:09:05.325727 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-03 07:09:05.325738 | orchestrator | Tuesday 03 February 2026 07:08:35 +0000 (0:00:02.091) 1:13:48.446 ****** 2026-02-03 07:09:05.325749 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-03 07:09:05.325760 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-03 07:09:05.325771 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-03 07:09:05.325782 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3)
2026-02-03 07:09:05.325793 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-03 07:09:05.325804 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-03 07:09:05.325815 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-03 07:09:05.325825 | orchestrator |
2026-02-03 07:09:05.325836 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-02-03 07:09:05.325864 | orchestrator | Tuesday 03 February 2026 07:08:37 +0000 (0:00:02.511) 1:13:50.958 ******
2026-02-03 07:09:05.325875 | orchestrator | changed: [testbed-node-5]
2026-02-03 07:09:05.325886 | orchestrator |
2026-02-03 07:09:05.325914 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-02-03 07:09:05.325926 | orchestrator | Tuesday 03 February 2026 07:08:40 +0000 (0:00:02.289) 1:13:53.247 ******
2026-02-03 07:09:05.325938 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-03 07:09:05.325950 | orchestrator |
2026-02-03 07:09:05.325961 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-02-03 07:09:05.325972 | orchestrator | Tuesday 03 February 2026 07:08:42 +0000 (0:00:02.873) 1:13:56.121 ******
2026-02-03 07:09:05.325983 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-03 07:09:05.325994 | orchestrator |
2026-02-03 07:09:05.326005 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-03 07:09:05.326085 | orchestrator | Tuesday 03 February 2026 07:08:44 +0000 (0:00:01.244) 1:13:58.081 ******
2026-02-03 07:09:05.326099 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5
2026-02-03 07:09:05.326111 | orchestrator |
2026-02-03 07:09:05.326122 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-03 07:09:05.326133 | orchestrator | Tuesday 03 February 2026 07:08:46 +0000 (0:00:01.244) 1:13:59.326 ******
2026-02-03 07:09:05.326144 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5
2026-02-03 07:09:05.326155 | orchestrator |
2026-02-03 07:09:05.326165 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-03 07:09:05.326176 | orchestrator | Tuesday 03 February 2026 07:08:47 +0000 (0:00:01.177) 1:14:00.504 ******
2026-02-03 07:09:05.326187 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:05.326198 | orchestrator |
2026-02-03 07:09:05.326209 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-03 07:09:05.326220 | orchestrator | Tuesday 03 February 2026 07:08:48 +0000 (0:00:01.284) 1:14:01.789 ******
2026-02-03 07:09:05.326231 | orchestrator | ok: [testbed-node-5]
2026-02-03 07:09:05.326242 | orchestrator |
2026-02-03 07:09:05.326252 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-03 07:09:05.326263 | orchestrator | Tuesday 03 February 2026 07:08:50 +0000 (0:00:01.549) 1:14:03.339 ******
2026-02-03 07:09:05.326274 | orchestrator | ok: [testbed-node-5]
2026-02-03 07:09:05.326285 | orchestrator |
2026-02-03 07:09:05.326296 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-03 07:09:05.326307 | orchestrator | Tuesday 03 February 2026 07:08:51 +0000 (0:00:01.613) 1:14:04.952 ******
2026-02-03 07:09:05.326318 | orchestrator | ok: [testbed-node-5]
2026-02-03 07:09:05.326329 | orchestrator |
2026-02-03 07:09:05.326340 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-03 07:09:05.326351 | orchestrator | Tuesday 03 February 2026 07:08:53 +0000 (0:00:01.603) 1:14:06.556 ******
2026-02-03 07:09:05.326362 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:05.326373 | orchestrator |
2026-02-03 07:09:05.326384 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-03 07:09:05.326395 | orchestrator | Tuesday 03 February 2026 07:08:54 +0000 (0:00:01.236) 1:14:07.793 ******
2026-02-03 07:09:05.326406 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:05.326417 | orchestrator |
2026-02-03 07:09:05.326428 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-03 07:09:05.326439 | orchestrator | Tuesday 03 February 2026 07:08:55 +0000 (0:00:01.206) 1:14:08.999 ******
2026-02-03 07:09:05.326450 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:05.326461 | orchestrator |
2026-02-03 07:09:05.326472 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-03 07:09:05.326482 | orchestrator | Tuesday 03 February 2026 07:08:56 +0000 (0:00:01.143) 1:14:10.142 ******
2026-02-03 07:09:05.326493 | orchestrator | ok: [testbed-node-5]
2026-02-03 07:09:05.326504 | orchestrator |
2026-02-03 07:09:05.326515 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-03 07:09:05.326526 | orchestrator | Tuesday 03 February 2026 07:08:58 +0000 (0:00:01.581) 1:14:11.724 ******
2026-02-03 07:09:05.326537 | orchestrator | ok: [testbed-node-5]
2026-02-03 07:09:05.326548 | orchestrator |
2026-02-03 07:09:05.326559 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-03 07:09:05.326569 | orchestrator | Tuesday 03 February 2026 07:09:00 +0000 (0:00:01.618) 1:14:13.343 ******
2026-02-03 07:09:05.326580 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:05.326591 | orchestrator |
2026-02-03 07:09:05.326602 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-03 07:09:05.326613 | orchestrator | Tuesday 03 February 2026 07:09:01 +0000 (0:00:00.905) 1:14:14.249 ******
2026-02-03 07:09:05.326624 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:05.326635 | orchestrator |
2026-02-03 07:09:05.326652 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-03 07:09:05.326663 | orchestrator | Tuesday 03 February 2026 07:09:01 +0000 (0:00:00.812) 1:14:15.061 ******
2026-02-03 07:09:05.326704 | orchestrator | ok: [testbed-node-5]
2026-02-03 07:09:05.326724 | orchestrator |
2026-02-03 07:09:05.326744 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-03 07:09:05.326762 | orchestrator | Tuesday 03 February 2026 07:09:02 +0000 (0:00:00.804) 1:14:15.865 ******
2026-02-03 07:09:05.326778 | orchestrator | ok: [testbed-node-5]
2026-02-03 07:09:05.326789 | orchestrator |
2026-02-03 07:09:05.326800 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-03 07:09:05.326811 | orchestrator | Tuesday 03 February 2026 07:09:03 +0000 (0:00:00.867) 1:14:16.733 ******
2026-02-03 07:09:05.326822 | orchestrator | ok: [testbed-node-5]
2026-02-03 07:09:05.326833 | orchestrator |
2026-02-03 07:09:05.326844 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-03 07:09:05.326864 | orchestrator | Tuesday 03 February 2026 07:09:04 +0000 (0:00:00.968) 1:14:17.702 ******
2026-02-03 07:09:05.326875 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:05.326886 | orchestrator |
2026-02-03 07:09:05.326906 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-03 07:09:48.309098 | orchestrator | Tuesday 03 February 2026 07:09:05 +0000 (0:00:00.798) 1:14:18.500 ******
2026-02-03 07:09:48.309205 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:48.309220 | orchestrator |
2026-02-03 07:09:48.309232 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-03 07:09:48.309242 | orchestrator | Tuesday 03 February 2026 07:09:06 +0000 (0:00:00.820) 1:14:19.320 ******
2026-02-03 07:09:48.309252 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:48.309262 | orchestrator |
2026-02-03 07:09:48.309272 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-03 07:09:48.309282 | orchestrator | Tuesday 03 February 2026 07:09:06 +0000 (0:00:00.766) 1:14:20.087 ******
2026-02-03 07:09:48.309293 | orchestrator | ok: [testbed-node-5]
2026-02-03 07:09:48.309303 | orchestrator |
2026-02-03 07:09:48.309313 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-03 07:09:48.309323 | orchestrator | Tuesday 03 February 2026 07:09:07 +0000 (0:00:00.897) 1:14:20.985 ******
2026-02-03 07:09:48.309333 | orchestrator | ok: [testbed-node-5]
2026-02-03 07:09:48.309342 | orchestrator |
2026-02-03 07:09:48.309352 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-03 07:09:48.309362 | orchestrator | Tuesday 03 February 2026 07:09:08 +0000 (0:00:00.881) 1:14:21.867 ******
2026-02-03 07:09:48.309371 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:48.309381 | orchestrator |
2026-02-03 07:09:48.309391 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-03 07:09:48.309400 | orchestrator | Tuesday 03 February 2026 07:09:09 +0000 (0:00:00.812) 1:14:22.679 ******
2026-02-03 07:09:48.309410 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:48.309424 | orchestrator |
2026-02-03 07:09:48.309440 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-03 07:09:48.309457 | orchestrator | Tuesday 03 February 2026 07:09:10 +0000 (0:00:00.849) 1:14:23.529 ******
2026-02-03 07:09:48.309472 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:48.309489 | orchestrator |
2026-02-03 07:09:48.309505 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-03 07:09:48.309519 | orchestrator | Tuesday 03 February 2026 07:09:11 +0000 (0:00:00.814) 1:14:24.344 ******
2026-02-03 07:09:48.309535 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:48.309550 | orchestrator |
2026-02-03 07:09:48.309566 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-03 07:09:48.309582 | orchestrator | Tuesday 03 February 2026 07:09:11 +0000 (0:00:00.825) 1:14:25.170 ******
2026-02-03 07:09:48.309596 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:48.309611 | orchestrator |
2026-02-03 07:09:48.309699 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-03 07:09:48.309722 | orchestrator | Tuesday 03 February 2026 07:09:12 +0000 (0:00:00.839) 1:14:26.009 ******
2026-02-03 07:09:48.309739 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:48.309756 | orchestrator |
2026-02-03 07:09:48.309775 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-03 07:09:48.309792 | orchestrator | Tuesday 03 February 2026 07:09:13 +0000 (0:00:00.773) 1:14:26.783 ******
2026-02-03 07:09:48.309809 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:48.309821 | orchestrator |
2026-02-03 07:09:48.309833 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-03 07:09:48.309846 | orchestrator | Tuesday 03 February 2026 07:09:14 +0000 (0:00:00.924) 1:14:27.708 ******
2026-02-03 07:09:48.309858 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:48.309869 | orchestrator |
2026-02-03 07:09:48.309880 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-03 07:09:48.309892 | orchestrator | Tuesday 03 February 2026 07:09:15 +0000 (0:00:00.973) 1:14:28.682 ******
2026-02-03 07:09:48.309903 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:48.309914 | orchestrator |
2026-02-03 07:09:48.309926 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-03 07:09:48.309938 | orchestrator | Tuesday 03 February 2026 07:09:16 +0000 (0:00:00.799) 1:14:29.481 ******
2026-02-03 07:09:48.309949 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:48.309960 | orchestrator |
2026-02-03 07:09:48.309972 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-03 07:09:48.309983 | orchestrator | Tuesday 03 February 2026 07:09:17 +0000 (0:00:00.788) 1:14:30.270 ******
2026-02-03 07:09:48.309995 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:48.310006 | orchestrator |
2026-02-03 07:09:48.310074 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-03 07:09:48.310087 | orchestrator | Tuesday 03 February 2026 07:09:17 +0000 (0:00:00.819) 1:14:31.091 ******
2026-02-03 07:09:48.310097 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:48.310107 | orchestrator |
2026-02-03 07:09:48.310117 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-03 07:09:48.310126 | orchestrator | Tuesday 03 February 2026 07:09:18 +0000 (0:00:00.808) 1:14:31.899 ******
2026-02-03 07:09:48.310136 | orchestrator | ok: [testbed-node-5]
2026-02-03 07:09:48.310146 | orchestrator |
2026-02-03 07:09:48.310155 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-03 07:09:48.310165 | orchestrator | Tuesday 03 February 2026 07:09:20 +0000 (0:00:01.710) 1:14:33.609 ******
2026-02-03 07:09:48.310175 | orchestrator | ok: [testbed-node-5]
2026-02-03 07:09:48.310184 | orchestrator |
2026-02-03 07:09:48.310194 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-03 07:09:48.310204 | orchestrator | Tuesday 03 February 2026 07:09:22 +0000 (0:00:01.934) 1:14:35.543 ******
2026-02-03 07:09:48.310213 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-02-03 07:09:48.310224 | orchestrator |
2026-02-03 07:09:48.310234 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-03 07:09:48.310259 | orchestrator | Tuesday 03 February 2026 07:09:23 +0000 (0:00:01.203) 1:14:36.747 ******
2026-02-03 07:09:48.310269 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:48.310279 | orchestrator |
2026-02-03 07:09:48.310289 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-03 07:09:48.310320 | orchestrator | Tuesday 03 February 2026 07:09:24 +0000 (0:00:01.168) 1:14:37.916 ******
2026-02-03 07:09:48.310331 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:48.310341 | orchestrator |
2026-02-03 07:09:48.310351 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-03 07:09:48.310360 | orchestrator | Tuesday 03 February 2026 07:09:25 +0000 (0:00:01.261) 1:14:39.178 ******
2026-02-03 07:09:48.310381 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-03 07:09:48.310391 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-03 07:09:48.310400 | orchestrator |
2026-02-03 07:09:48.310410 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-03 07:09:48.310420 | orchestrator | Tuesday 03 February 2026 07:09:27 +0000 (0:00:01.888) 1:14:41.067 ******
2026-02-03 07:09:48.310429 | orchestrator | ok: [testbed-node-5]
2026-02-03 07:09:48.310439 | orchestrator |
2026-02-03 07:09:48.310448 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-03 07:09:48.310458 | orchestrator | Tuesday 03 February 2026 07:09:29 +0000 (0:00:01.609) 1:14:42.677 ******
2026-02-03 07:09:48.310468 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:48.310477 | orchestrator |
2026-02-03 07:09:48.310495 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-03 07:09:48.310512 | orchestrator | Tuesday 03 February 2026 07:09:30 +0000 (0:00:01.218) 1:14:43.895 ******
2026-02-03 07:09:48.310530 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:48.310547 | orchestrator |
2026-02-03 07:09:48.310563 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-03 07:09:48.310578 | orchestrator | Tuesday 03 February 2026 07:09:31 +0000 (0:00:00.932) 1:14:44.828 ******
2026-02-03 07:09:48.310595 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:48.310613 | orchestrator |
2026-02-03 07:09:48.310629 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-03 07:09:48.310644 | orchestrator | Tuesday 03 February 2026 07:09:32 +0000 (0:00:00.855) 1:14:45.683 ******
2026-02-03 07:09:48.310684 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-02-03 07:09:48.310700 | orchestrator |
2026-02-03 07:09:48.310716 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-03 07:09:48.310731 | orchestrator | Tuesday 03 February 2026 07:09:33 +0000 (0:00:01.177) 1:14:46.861 ******
2026-02-03 07:09:48.310747 | orchestrator | ok: [testbed-node-5]
2026-02-03 07:09:48.310762 | orchestrator |
2026-02-03 07:09:48.310778 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-03 07:09:48.310794 | orchestrator | Tuesday 03 February 2026 07:09:35 +0000 (0:00:01.818) 1:14:48.680 ******
2026-02-03 07:09:48.310809 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-03 07:09:48.310825 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-03 07:09:48.310841 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-03 07:09:48.310856 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:48.310871 | orchestrator |
2026-02-03 07:09:48.310888 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-03 07:09:48.310904 | orchestrator | Tuesday 03 February 2026 07:09:36 +0000 (0:00:01.194) 1:14:49.874 ******
2026-02-03 07:09:48.310921 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:48.310938 | orchestrator |
2026-02-03 07:09:48.310954 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-03 07:09:48.310967 | orchestrator | Tuesday 03 February 2026 07:09:37 +0000 (0:00:01.244) 1:14:51.119 ******
2026-02-03 07:09:48.310983 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:48.310999 | orchestrator |
2026-02-03 07:09:48.311016 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-03 07:09:48.311032 | orchestrator | Tuesday 03 February 2026 07:09:39 +0000 (0:00:01.225) 1:14:52.344 ******
2026-02-03 07:09:48.311048 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:48.311065 | orchestrator |
2026-02-03 07:09:48.311082 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-03 07:09:48.311099 | orchestrator | Tuesday 03 February 2026 07:09:40 +0000 (0:00:01.194) 1:14:53.539 ******
2026-02-03 07:09:48.311116 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:48.311146 | orchestrator |
2026-02-03 07:09:48.311162 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-03 07:09:48.311178 | orchestrator | Tuesday 03 February 2026 07:09:41 +0000 (0:00:01.209) 1:14:54.748 ******
2026-02-03 07:09:48.311193 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:48.311209 | orchestrator |
2026-02-03 07:09:48.311225 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-03 07:09:48.311238 | orchestrator | Tuesday 03 February 2026 07:09:42 +0000 (0:00:00.836) 1:14:55.585 ******
2026-02-03 07:09:48.311248 | orchestrator | ok: [testbed-node-5]
2026-02-03 07:09:48.311257 | orchestrator |
2026-02-03 07:09:48.311267 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-03 07:09:48.311277 | orchestrator | Tuesday 03 February 2026 07:09:44 +0000 (0:00:02.429) 1:14:58.015 ******
2026-02-03 07:09:48.311287 | orchestrator | ok: [testbed-node-5]
2026-02-03 07:09:48.311296 | orchestrator |
2026-02-03 07:09:48.311306 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-03 07:09:48.311315 | orchestrator | Tuesday 03 February 2026 07:09:45 +0000 (0:00:00.980) 1:14:58.995 ******
2026-02-03 07:09:48.311325 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-02-03 07:09:48.311335 | orchestrator |
2026-02-03 07:09:48.311344 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-03 07:09:48.311364 | orchestrator | Tuesday 03 February 2026 07:09:47 +0000 (0:00:01.250) 1:15:00.245 ******
2026-02-03 07:09:48.311381 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:09:48.311398 | orchestrator |
2026-02-03 07:09:48.311414 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-03 07:09:48.311447 | orchestrator | Tuesday 03 February 2026 07:09:48 +0000 (0:00:01.236) 1:15:01.482 ******
2026-02-03 07:10:32.147935 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:10:32.148052 | orchestrator |
2026-02-03 07:10:32.148068 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-03 07:10:32.148082 | orchestrator | Tuesday 03 February 2026 07:09:49 +0000 (0:00:01.225) 1:15:02.707 ******
2026-02-03 07:10:32.148093 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:10:32.148104 | orchestrator |
2026-02-03 07:10:32.148115 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-03 07:10:32.148126 | orchestrator | Tuesday 03 February 2026 07:09:50 +0000 (0:00:01.257) 1:15:03.965 ******
2026-02-03 07:10:32.148137 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:10:32.148148 | orchestrator |
2026-02-03 07:10:32.148159 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-03 07:10:32.148170 | orchestrator | Tuesday 03 February 2026 07:09:51 +0000 (0:00:01.170) 1:15:05.135 ******
2026-02-03 07:10:32.148180 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:10:32.148191 | orchestrator |
2026-02-03 07:10:32.148202 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-03 07:10:32.148213 | orchestrator | Tuesday 03 February 2026 07:09:53 +0000 (0:00:01.226) 1:15:06.361 ******
2026-02-03 07:10:32.148224 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:10:32.148234 | orchestrator |
2026-02-03 07:10:32.148245 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-03 07:10:32.148256 | orchestrator | Tuesday 03 February 2026 07:09:54 +0000 (0:00:01.190) 1:15:07.552 ******
2026-02-03 07:10:32.148267 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:10:32.148278 | orchestrator |
2026-02-03 07:10:32.148289 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-03 07:10:32.148300 | orchestrator | Tuesday 03 February 2026 07:09:55 +0000 (0:00:01.205) 1:15:08.757 ******
2026-02-03 07:10:32.148310 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:10:32.148321 | orchestrator |
2026-02-03 07:10:32.148332 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-03 07:10:32.148343 | orchestrator | Tuesday 03 February 2026 07:09:56 +0000 (0:00:01.268) 1:15:10.026 ******
2026-02-03 07:10:32.148408 | orchestrator | ok: [testbed-node-5]
2026-02-03 07:10:32.148421 | orchestrator |
2026-02-03 07:10:32.148432 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-03 07:10:32.148443 | orchestrator | Tuesday 03 February 2026 07:09:57 +0000 (0:00:01.046) 1:15:11.072 ******
2026-02-03 07:10:32.148454 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-02-03 07:10:32.148466 | orchestrator |
2026-02-03 07:10:32.148479 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-03 07:10:32.148492 | orchestrator | Tuesday 03 February 2026 07:09:59 +0000 (0:00:01.179) 1:15:12.252 ******
2026-02-03 07:10:32.148505 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-02-03 07:10:32.148518 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-02-03 07:10:32.148530 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-02-03 07:10:32.148542 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-02-03 07:10:32.148554 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-02-03 07:10:32.148566 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-02-03 07:10:32.148578 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-02-03 07:10:32.148590 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-02-03 07:10:32.148603 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-03 07:10:32.148616 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-03 07:10:32.148628 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-03 07:10:32.148667 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-03 07:10:32.148681 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-03 07:10:32.148694 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-03 07:10:32.148706 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-02-03 07:10:32.148718 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-02-03 07:10:32.148731 | orchestrator |
2026-02-03 07:10:32.148743 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-03 07:10:32.148755 | orchestrator | Tuesday 03 February 2026 07:10:05 +0000 (0:00:06.666) 1:15:18.919 ******
2026-02-03 07:10:32.148767 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5
2026-02-03 07:10:32.148780 | orchestrator |
2026-02-03 07:10:32.148792 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-03 07:10:32.148804 | orchestrator | Tuesday 03 February 2026 07:10:06 +0000 (0:00:01.190) 1:15:20.109 ******
2026-02-03 07:10:32.148817 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-03 07:10:32.148831 | orchestrator |
2026-02-03 07:10:32.148845 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-03 07:10:32.148858 | orchestrator | Tuesday 03 February 2026 07:10:08 +0000 (0:00:01.573) 1:15:21.682 ******
2026-02-03 07:10:32.148869 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-03 07:10:32.148880 | orchestrator |
2026-02-03 07:10:32.148891 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-03 07:10:32.148916 | orchestrator | Tuesday 03 February 2026 07:10:10 +0000 (0:00:01.687) 1:15:23.369 ******
2026-02-03 07:10:32.148927 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:10:32.148938 | orchestrator |
2026-02-03 07:10:32.148949 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-03 07:10:32.148976 | orchestrator | Tuesday 03 February 2026 07:10:11 +0000 (0:00:00.822) 1:15:24.192 ******
2026-02-03 07:10:32.148988 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:10:32.148999 | orchestrator |
2026-02-03 07:10:32.149010 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-03 07:10:32.149029 | orchestrator | Tuesday 03 February 2026 07:10:11 +0000 (0:00:00.859) 1:15:25.051 ******
2026-02-03 07:10:32.149040 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:10:32.149051 | orchestrator |
2026-02-03 07:10:32.149062 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-03 07:10:32.149072 | orchestrator | Tuesday 03 February 2026 07:10:12 +0000 (0:00:00.827) 1:15:25.879 ******
2026-02-03 07:10:32.149083 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:10:32.149094 | orchestrator |
2026-02-03 07:10:32.149105 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-03 07:10:32.149115 | orchestrator | Tuesday 03 February 2026 07:10:13 +0000 (0:00:00.826) 1:15:26.706 ******
2026-02-03 07:10:32.149126 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:10:32.149137 | orchestrator |
2026-02-03 07:10:32.149148 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-03 07:10:32.149159 | orchestrator | Tuesday 03 February 2026 07:10:14 +0000 (0:00:00.824) 1:15:27.530 ******
2026-02-03 07:10:32.149170 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:10:32.149180 | orchestrator |
2026-02-03 07:10:32.149191 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-03 07:10:32.149202 | orchestrator | Tuesday 03 February 2026 07:10:15 +0000 (0:00:00.798) 1:15:28.329 ******
2026-02-03 07:10:32.149213 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:10:32.149223 | orchestrator |
2026-02-03 07:10:32.149234 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-03 07:10:32.149245 | orchestrator | Tuesday 03 February 2026 07:10:16 +0000 (0:00:01.150) 1:15:29.479 ******
2026-02-03 07:10:32.149256 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:10:32.149267 | orchestrator |
2026-02-03 07:10:32.149277 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-03 07:10:32.149288 | orchestrator | Tuesday 03 February 2026 07:10:17 +0000 (0:00:00.801) 1:15:30.280 ******
2026-02-03 07:10:32.149299 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:10:32.149310 | orchestrator |
2026-02-03 07:10:32.149321 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-03 07:10:32.149331 | orchestrator | Tuesday 03 February 2026 07:10:17 +0000 (0:00:00.807) 1:15:31.088 ******
2026-02-03 07:10:32.149342 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:10:32.149353 | orchestrator |
2026-02-03 07:10:32.149364 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-03 07:10:32.149375 | orchestrator | Tuesday 03 February 2026 07:10:18 +0000 (0:00:00.885) 1:15:31.973 ******
2026-02-03 07:10:32.149386 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:10:32.149397 | orchestrator |
2026-02-03 07:10:32.149408 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-03 07:10:32.149419 | orchestrator | Tuesday 03 February 2026 07:10:19 +0000 (0:00:00.917) 1:15:32.891 ******
2026-02-03 07:10:32.149429 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-02-03 07:10:32.149440 | orchestrator |
2026-02-03 07:10:32.149451 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-03 07:10:32.149461 | orchestrator | Tuesday 03 February 2026 07:10:23 +0000 (0:00:04.179) 1:15:37.071 ******
2026-02-03 07:10:32.149472 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-03 07:10:32.149483 | orchestrator |
2026-02-03 07:10:32.149494 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-03 07:10:32.149505 | orchestrator | Tuesday 03 February 2026 07:10:24 +0000 (0:00:00.877) 1:15:37.948 ******
2026-02-03 07:10:32.149518 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-03 07:10:32.149540 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-03 07:10:32.149553 | orchestrator |
2026-02-03 07:10:32.149564 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-03 07:10:32.149575 | orchestrator | Tuesday 03 February 2026 07:10:29 +0000 (0:00:04.717) 1:15:42.665 ******
2026-02-03 07:10:32.149585 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:10:32.149596 | orchestrator |
2026-02-03 07:10:32.149607 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-03 07:10:32.149618 | orchestrator | Tuesday 03 February 2026 07:10:30 +0000 (0:00:00.856) 1:15:43.522 ******
2026-02-03 07:10:32.149629 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:10:32.149665 | orchestrator |
2026-02-03 07:10:32.149683 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-03 07:10:32.149694 | orchestrator | Tuesday 03 February 2026 07:10:31 +0000 (0:00:00.912) 1:15:44.435 ******
2026-02-03 07:10:32.149705 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:10:32.149715 | orchestrator |
2026-02-03 07:10:32.149726 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-03 07:10:32.149744 | orchestrator | Tuesday 03 February 2026 07:10:32 +0000 (0:00:00.884) 1:15:45.319 ******
2026-02-03 07:11:42.779178 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:11:42.779291 | orchestrator |
2026-02-03 07:11:42.779307 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-03 07:11:42.779321 | orchestrator | Tuesday 03 February 2026 07:10:33 +0000 (0:00:00.922) 1:15:46.241 ******
2026-02-03 07:11:42.779332 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:11:42.779343 | orchestrator |
2026-02-03 07:11:42.779354 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-03 07:11:42.779365 | orchestrator | Tuesday 03 February 2026 07:10:33 +0000 (0:00:00.844) 1:15:47.086 ******
2026-02-03 07:11:42.779376 | orchestrator | ok: [testbed-node-5]
2026-02-03 07:11:42.779388 | orchestrator |
2026-02-03 07:11:42.779399 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-03 07:11:42.779410 | orchestrator | Tuesday 03 February 2026 07:10:34 +0000 (0:00:01.064) 1:15:48.151 ******
2026-02-03 07:11:42.779421 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-03 07:11:42.779432 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-03 07:11:42.779443 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-03 07:11:42.779454 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:11:42.779465 | orchestrator |
2026-02-03 07:11:42.779476 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-03 07:11:42.779487 | orchestrator | Tuesday 03 February 2026 07:10:36 +0000 (0:00:01.148) 1:15:49.299 ******
2026-02-03 07:11:42.779498 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-03 07:11:42.779509 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-03 07:11:42.779520 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-03 07:11:42.779530 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:11:42.779541 | orchestrator |
2026-02-03 07:11:42.779552 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-03 07:11:42.779563 | orchestrator | Tuesday 03 February 2026 07:10:37 +0000 (0:00:01.133) 1:15:50.433 ******
2026-02-03 07:11:42.779574 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-03 07:11:42.779585 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-03 07:11:42.779671 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-03 07:11:42.779684 | orchestrator | skipping: [testbed-node-5]
2026-02-03 07:11:42.779695 | orchestrator |
2026-02-03 07:11:42.779706 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-03 07:11:42.779720 | orchestrator | Tuesday 03 February 2026 07:10:38 +0000 (0:00:01.128) 1:15:51.561 ******
2026-02-03 07:11:42.779733 | orchestrator | ok: [testbed-node-5]
2026-02-03 07:11:42.779746 | orchestrator |
2026-02-03 07:11:42.779759 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-03 07:11:42.779771 | orchestrator | Tuesday 03 February 2026 07:10:39 +0000 (0:00:00.856) 1:15:52.417 ******
2026-02-03 07:11:42.779785 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-03 07:11:42.779797 | orchestrator |
2026-02-03 07:11:42.779810 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-03 07:11:42.779822 | orchestrator | Tuesday 03 February 2026 07:10:40 +0000 (0:00:01.084) 1:15:53.502 ******
2026-02-03 07:11:42.779835 | orchestrator | ok: [testbed-node-5]
2026-02-03 07:11:42.779848 | orchestrator |
2026-02-03 07:11:42.779861 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-02-03 07:11:42.779874 | orchestrator | Tuesday 03 February 2026 07:10:41 +0000 (0:00:01.522) 1:15:55.024 ******
2026-02-03 07:11:42.779886 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-5
2026-02-03 07:11:42.779897 | orchestrator |
2026-02-03 07:11:42.779910 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-03 07:11:42.779922 | orchestrator | Tuesday 03 February 2026 07:10:42 +0000 (0:00:01.150) 1:15:56.175 ******
2026-02-03 07:11:42.779935 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-03 07:11:42.779947 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-03 07:11:42.779960 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-03 07:11:42.779973 | orchestrator |
2026-02-03 07:11:42.779986 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-03 07:11:42.779999 | orchestrator | Tuesday 03 February 2026 07:10:46 +0000 (0:00:03.325) 1:15:59.500 ******
2026-02-03 07:11:42.780012 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-02-03 07:11:42.780025 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-03 07:11:42.780037 | orchestrator | ok: [testbed-node-5]
2026-02-03 07:11:42.780050 | orchestrator |
2026-02-03 07:11:42.780063 | orchestrator | TASK [ceph-rgw : Copy
SSL certificate & key data to certificate path] ********** 2026-02-03 07:11:42.780075 | orchestrator | Tuesday 03 February 2026 07:10:48 +0000 (0:00:02.205) 1:16:01.706 ****** 2026-02-03 07:11:42.780086 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:11:42.780096 | orchestrator | 2026-02-03 07:11:42.780107 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-03 07:11:42.780118 | orchestrator | Tuesday 03 February 2026 07:10:49 +0000 (0:00:00.931) 1:16:02.637 ****** 2026-02-03 07:11:42.780129 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-5 2026-02-03 07:11:42.780141 | orchestrator | 2026-02-03 07:11:42.780151 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-03 07:11:42.780162 | orchestrator | Tuesday 03 February 2026 07:10:50 +0000 (0:00:01.243) 1:16:03.881 ****** 2026-02-03 07:11:42.780188 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-03 07:11:42.780200 | orchestrator | 2026-02-03 07:11:42.780211 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-03 07:11:42.780222 | orchestrator | Tuesday 03 February 2026 07:10:52 +0000 (0:00:01.673) 1:16:05.555 ****** 2026-02-03 07:11:42.780250 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 07:11:42.780263 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-03 07:11:42.780282 | orchestrator | 2026-02-03 07:11:42.780293 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-03 07:11:42.780304 | orchestrator | Tuesday 03 February 2026 07:10:57 +0000 (0:00:05.429) 1:16:10.984 ****** 
2026-02-03 07:11:42.780315 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-03 07:11:42.780325 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-03 07:11:42.780336 | orchestrator | 2026-02-03 07:11:42.780347 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-03 07:11:42.780358 | orchestrator | Tuesday 03 February 2026 07:11:01 +0000 (0:00:03.288) 1:16:14.273 ****** 2026-02-03 07:11:42.780369 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-02-03 07:11:42.780380 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:11:42.780391 | orchestrator | 2026-02-03 07:11:42.780402 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-03 07:11:42.780412 | orchestrator | Tuesday 03 February 2026 07:11:02 +0000 (0:00:01.690) 1:16:15.963 ****** 2026-02-03 07:11:42.780423 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-5 2026-02-03 07:11:42.780434 | orchestrator | 2026-02-03 07:11:42.780445 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-03 07:11:42.780456 | orchestrator | Tuesday 03 February 2026 07:11:03 +0000 (0:00:01.198) 1:16:17.161 ****** 2026-02-03 07:11:42.780467 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 07:11:42.780479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 07:11:42.780490 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 07:11:42.780501 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-02-03 07:11:42.780512 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 07:11:42.780522 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:11:42.780533 | orchestrator | 2026-02-03 07:11:42.780544 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-03 07:11:42.780555 | orchestrator | Tuesday 03 February 2026 07:11:06 +0000 (0:00:02.190) 1:16:19.352 ****** 2026-02-03 07:11:42.780566 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 07:11:42.780577 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 07:11:42.780588 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 07:11:42.780599 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 07:11:42.780631 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-03 07:11:42.780642 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:11:42.780653 | orchestrator | 2026-02-03 07:11:42.780664 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-03 07:11:42.780675 | orchestrator | Tuesday 03 February 2026 07:11:08 +0000 (0:00:02.142) 1:16:21.494 ****** 2026-02-03 07:11:42.780686 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-03 07:11:42.780697 
| orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-03 07:11:42.780717 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-03 07:11:42.780728 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-03 07:11:42.780739 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-03 07:11:42.780750 | orchestrator | 2026-02-03 07:11:42.780766 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-03 07:11:42.780777 | orchestrator | Tuesday 03 February 2026 07:11:41 +0000 (0:00:33.649) 1:16:55.144 ****** 2026-02-03 07:11:42.780788 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:11:42.780799 | orchestrator | 2026-02-03 07:11:42.780810 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-03 07:11:42.780827 | orchestrator | Tuesday 03 February 2026 07:11:42 +0000 (0:00:00.811) 1:16:55.956 ****** 2026-02-03 07:12:41.446250 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:12:41.446334 | orchestrator | 2026-02-03 07:12:41.446346 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-03 07:12:41.446355 | orchestrator | Tuesday 03 February 2026 07:11:43 +0000 (0:00:00.923) 1:16:56.879 ****** 2026-02-03 07:12:41.446362 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-5 2026-02-03 07:12:41.446371 | orchestrator | 2026-02-03 07:12:41.446378 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-02-03 07:12:41.446385 | orchestrator | Tuesday 03 February 2026 07:11:44 +0000 (0:00:01.168) 1:16:58.048 ****** 2026-02-03 07:12:41.446394 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-5 2026-02-03 07:12:41.446400 | orchestrator | 2026-02-03 07:12:41.446404 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-03 07:12:41.446409 | orchestrator | Tuesday 03 February 2026 07:11:46 +0000 (0:00:01.181) 1:16:59.229 ****** 2026-02-03 07:12:41.446413 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:12:41.446418 | orchestrator | 2026-02-03 07:12:41.446422 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-03 07:12:41.446426 | orchestrator | Tuesday 03 February 2026 07:11:48 +0000 (0:00:02.077) 1:17:01.307 ****** 2026-02-03 07:12:41.446430 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:12:41.446434 | orchestrator | 2026-02-03 07:12:41.446438 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-03 07:12:41.446443 | orchestrator | Tuesday 03 February 2026 07:11:50 +0000 (0:00:02.004) 1:17:03.311 ****** 2026-02-03 07:12:41.446446 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:12:41.446450 | orchestrator | 2026-02-03 07:12:41.446454 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-03 07:12:41.446459 | orchestrator | Tuesday 03 February 2026 07:11:52 +0000 (0:00:02.469) 1:17:05.781 ****** 2026-02-03 07:12:41.446464 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-03 07:12:41.446469 | orchestrator | 2026-02-03 07:12:41.446473 | orchestrator | PLAY [Upgrade ceph rbd mirror node] ******************************************** 2026-02-03 07:12:41.446477 | 
orchestrator | skipping: no hosts matched 2026-02-03 07:12:41.446481 | orchestrator | 2026-02-03 07:12:41.446484 | orchestrator | PLAY [Upgrade ceph nfs node] *************************************************** 2026-02-03 07:12:41.446490 | orchestrator | skipping: no hosts matched 2026-02-03 07:12:41.446496 | orchestrator | 2026-02-03 07:12:41.446502 | orchestrator | PLAY [Upgrade ceph client node] ************************************************ 2026-02-03 07:12:41.446508 | orchestrator | skipping: no hosts matched 2026-02-03 07:12:41.446514 | orchestrator | 2026-02-03 07:12:41.446520 | orchestrator | PLAY [Upgrade ceph-crash daemons] ********************************************** 2026-02-03 07:12:41.446545 | orchestrator | 2026-02-03 07:12:41.446552 | orchestrator | TASK [Stop the ceph-crash service] ********************************************* 2026-02-03 07:12:41.446559 | orchestrator | Tuesday 03 February 2026 07:11:57 +0000 (0:00:04.716) 1:17:10.497 ****** 2026-02-03 07:12:41.446565 | orchestrator | changed: [testbed-node-0] 2026-02-03 07:12:41.446571 | orchestrator | changed: [testbed-node-1] 2026-02-03 07:12:41.446575 | orchestrator | changed: [testbed-node-2] 2026-02-03 07:12:41.446579 | orchestrator | changed: [testbed-node-3] 2026-02-03 07:12:41.446582 | orchestrator | changed: [testbed-node-4] 2026-02-03 07:12:41.446586 | orchestrator | changed: [testbed-node-5] 2026-02-03 07:12:41.446620 | orchestrator | 2026-02-03 07:12:41.446625 | orchestrator | TASK [Mask and disable the ceph-crash service] ********************************* 2026-02-03 07:12:41.446629 | orchestrator | Tuesday 03 February 2026 07:12:00 +0000 (0:00:03.105) 1:17:13.603 ****** 2026-02-03 07:12:41.446633 | orchestrator | changed: [testbed-node-0] 2026-02-03 07:12:41.446637 | orchestrator | changed: [testbed-node-3] 2026-02-03 07:12:41.446641 | orchestrator | changed: [testbed-node-2] 2026-02-03 07:12:41.446645 | orchestrator | changed: [testbed-node-4] 2026-02-03 07:12:41.446650 | 
orchestrator | changed: [testbed-node-5] 2026-02-03 07:12:41.446657 | orchestrator | changed: [testbed-node-1] 2026-02-03 07:12:41.446663 | orchestrator | 2026-02-03 07:12:41.446670 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-03 07:12:41.446677 | orchestrator | Tuesday 03 February 2026 07:12:04 +0000 (0:00:04.320) 1:17:17.923 ****** 2026-02-03 07:12:41.446684 | orchestrator | ok: [testbed-node-0] 2026-02-03 07:12:41.446691 | orchestrator | ok: [testbed-node-1] 2026-02-03 07:12:41.446697 | orchestrator | ok: [testbed-node-2] 2026-02-03 07:12:41.446705 | orchestrator | ok: [testbed-node-3] 2026-02-03 07:12:41.446712 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:12:41.446719 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:12:41.446726 | orchestrator | 2026-02-03 07:12:41.446731 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-03 07:12:41.446736 | orchestrator | Tuesday 03 February 2026 07:12:07 +0000 (0:00:02.340) 1:17:20.263 ****** 2026-02-03 07:12:41.446739 | orchestrator | ok: [testbed-node-0] 2026-02-03 07:12:41.446743 | orchestrator | ok: [testbed-node-1] 2026-02-03 07:12:41.446747 | orchestrator | ok: [testbed-node-2] 2026-02-03 07:12:41.446751 | orchestrator | ok: [testbed-node-3] 2026-02-03 07:12:41.446755 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:12:41.446759 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:12:41.446763 | orchestrator | 2026-02-03 07:12:41.446767 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-03 07:12:41.446771 | orchestrator | Tuesday 03 February 2026 07:12:09 +0000 (0:00:02.409) 1:17:22.673 ****** 2026-02-03 07:12:41.446776 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-03 07:12:41.446781 | 
orchestrator | 2026-02-03 07:12:41.446797 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-03 07:12:41.446801 | orchestrator | Tuesday 03 February 2026 07:12:11 +0000 (0:00:02.488) 1:17:25.162 ****** 2026-02-03 07:12:41.446805 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-03 07:12:41.446809 | orchestrator | 2026-02-03 07:12:41.446825 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-03 07:12:41.446830 | orchestrator | Tuesday 03 February 2026 07:12:14 +0000 (0:00:02.734) 1:17:27.896 ****** 2026-02-03 07:12:41.446835 | orchestrator | ok: [testbed-node-0] 2026-02-03 07:12:41.446839 | orchestrator | skipping: [testbed-node-3] 2026-02-03 07:12:41.446844 | orchestrator | ok: [testbed-node-1] 2026-02-03 07:12:41.446848 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:12:41.446853 | orchestrator | ok: [testbed-node-2] 2026-02-03 07:12:41.446862 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:12:41.446867 | orchestrator | 2026-02-03 07:12:41.446872 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-03 07:12:41.446876 | orchestrator | Tuesday 03 February 2026 07:12:17 +0000 (0:00:02.326) 1:17:30.222 ****** 2026-02-03 07:12:41.446881 | orchestrator | skipping: [testbed-node-0] 2026-02-03 07:12:41.446886 | orchestrator | skipping: [testbed-node-1] 2026-02-03 07:12:41.446890 | orchestrator | skipping: [testbed-node-2] 2026-02-03 07:12:41.446895 | orchestrator | ok: [testbed-node-3] 2026-02-03 07:12:41.446900 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:12:41.446907 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:12:41.446912 | orchestrator | 2026-02-03 07:12:41.446916 | orchestrator | TASK [ceph-handler : Check for a mds container] 
******************************** 2026-02-03 07:12:41.446921 | orchestrator | Tuesday 03 February 2026 07:12:19 +0000 (0:00:02.833) 1:17:33.056 ****** 2026-02-03 07:12:41.446925 | orchestrator | skipping: [testbed-node-0] 2026-02-03 07:12:41.446930 | orchestrator | skipping: [testbed-node-1] 2026-02-03 07:12:41.446934 | orchestrator | skipping: [testbed-node-2] 2026-02-03 07:12:41.446939 | orchestrator | ok: [testbed-node-3] 2026-02-03 07:12:41.446945 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:12:41.446952 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:12:41.446958 | orchestrator | 2026-02-03 07:12:41.446966 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-03 07:12:41.446973 | orchestrator | Tuesday 03 February 2026 07:12:22 +0000 (0:00:02.467) 1:17:35.524 ****** 2026-02-03 07:12:41.446979 | orchestrator | skipping: [testbed-node-0] 2026-02-03 07:12:41.446985 | orchestrator | skipping: [testbed-node-1] 2026-02-03 07:12:41.446992 | orchestrator | skipping: [testbed-node-2] 2026-02-03 07:12:41.446998 | orchestrator | ok: [testbed-node-3] 2026-02-03 07:12:41.447003 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:12:41.447007 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:12:41.447012 | orchestrator | 2026-02-03 07:12:41.447016 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-03 07:12:41.447020 | orchestrator | Tuesday 03 February 2026 07:12:25 +0000 (0:00:02.718) 1:17:38.242 ****** 2026-02-03 07:12:41.447025 | orchestrator | skipping: [testbed-node-3] 2026-02-03 07:12:41.447029 | orchestrator | ok: [testbed-node-0] 2026-02-03 07:12:41.447034 | orchestrator | ok: [testbed-node-1] 2026-02-03 07:12:41.447038 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:12:41.447043 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:12:41.447047 | orchestrator | ok: [testbed-node-2] 2026-02-03 07:12:41.447051 | orchestrator | 
2026-02-03 07:12:41.447055 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-03 07:12:41.447060 | orchestrator | Tuesday 03 February 2026 07:12:27 +0000 (0:00:02.436) 1:17:40.679 ****** 2026-02-03 07:12:41.447064 | orchestrator | skipping: [testbed-node-0] 2026-02-03 07:12:41.447069 | orchestrator | skipping: [testbed-node-1] 2026-02-03 07:12:41.447073 | orchestrator | skipping: [testbed-node-2] 2026-02-03 07:12:41.447078 | orchestrator | skipping: [testbed-node-3] 2026-02-03 07:12:41.447082 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:12:41.447087 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:12:41.447091 | orchestrator | 2026-02-03 07:12:41.447095 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-03 07:12:41.447100 | orchestrator | Tuesday 03 February 2026 07:12:29 +0000 (0:00:02.170) 1:17:42.850 ****** 2026-02-03 07:12:41.447104 | orchestrator | skipping: [testbed-node-0] 2026-02-03 07:12:41.447108 | orchestrator | skipping: [testbed-node-1] 2026-02-03 07:12:41.447113 | orchestrator | skipping: [testbed-node-2] 2026-02-03 07:12:41.447117 | orchestrator | skipping: [testbed-node-3] 2026-02-03 07:12:41.447121 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:12:41.447126 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:12:41.447130 | orchestrator | 2026-02-03 07:12:41.447135 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-03 07:12:41.447139 | orchestrator | Tuesday 03 February 2026 07:12:32 +0000 (0:00:02.419) 1:17:45.269 ****** 2026-02-03 07:12:41.447147 | orchestrator | ok: [testbed-node-0] 2026-02-03 07:12:41.447152 | orchestrator | ok: [testbed-node-1] 2026-02-03 07:12:41.447156 | orchestrator | ok: [testbed-node-2] 2026-02-03 07:12:41.447161 | orchestrator | ok: [testbed-node-3] 2026-02-03 07:12:41.447165 | orchestrator | ok: [testbed-node-4] 
2026-02-03 07:12:41.447170 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:12:41.447174 | orchestrator | 2026-02-03 07:12:41.447179 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-03 07:12:41.447183 | orchestrator | Tuesday 03 February 2026 07:12:34 +0000 (0:00:02.262) 1:17:47.532 ****** 2026-02-03 07:12:41.447188 | orchestrator | ok: [testbed-node-0] 2026-02-03 07:12:41.447195 | orchestrator | ok: [testbed-node-1] 2026-02-03 07:12:41.447202 | orchestrator | ok: [testbed-node-2] 2026-02-03 07:12:41.447206 | orchestrator | ok: [testbed-node-3] 2026-02-03 07:12:41.447209 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:12:41.447213 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:12:41.447217 | orchestrator | 2026-02-03 07:12:41.447221 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-03 07:12:41.447225 | orchestrator | Tuesday 03 February 2026 07:12:37 +0000 (0:00:02.884) 1:17:50.416 ****** 2026-02-03 07:12:41.447229 | orchestrator | skipping: [testbed-node-0] 2026-02-03 07:12:41.447233 | orchestrator | skipping: [testbed-node-1] 2026-02-03 07:12:41.447237 | orchestrator | skipping: [testbed-node-2] 2026-02-03 07:12:41.447241 | orchestrator | skipping: [testbed-node-3] 2026-02-03 07:12:41.447245 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:12:41.447249 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:12:41.447252 | orchestrator | 2026-02-03 07:12:41.447260 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-03 07:12:41.447264 | orchestrator | Tuesday 03 February 2026 07:12:39 +0000 (0:00:01.988) 1:17:52.405 ****** 2026-02-03 07:12:41.447268 | orchestrator | ok: [testbed-node-0] 2026-02-03 07:12:41.447271 | orchestrator | ok: [testbed-node-1] 2026-02-03 07:12:41.447275 | orchestrator | ok: [testbed-node-2] 2026-02-03 07:12:41.447279 | orchestrator | skipping: 
[testbed-node-3] 2026-02-03 07:12:41.447283 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:12:41.447287 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:12:41.447291 | orchestrator | 2026-02-03 07:12:41.447298 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-03 07:13:40.579323 | orchestrator | Tuesday 03 February 2026 07:12:41 +0000 (0:00:02.210) 1:17:54.615 ****** 2026-02-03 07:13:40.579426 | orchestrator | skipping: [testbed-node-0] 2026-02-03 07:13:40.579437 | orchestrator | skipping: [testbed-node-1] 2026-02-03 07:13:40.579443 | orchestrator | skipping: [testbed-node-2] 2026-02-03 07:13:40.579450 | orchestrator | ok: [testbed-node-3] 2026-02-03 07:13:40.579458 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:13:40.579464 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:13:40.579471 | orchestrator | 2026-02-03 07:13:40.579478 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-03 07:13:40.579485 | orchestrator | Tuesday 03 February 2026 07:12:43 +0000 (0:00:01.880) 1:17:56.496 ****** 2026-02-03 07:13:40.579491 | orchestrator | skipping: [testbed-node-0] 2026-02-03 07:13:40.579498 | orchestrator | skipping: [testbed-node-1] 2026-02-03 07:13:40.579505 | orchestrator | skipping: [testbed-node-2] 2026-02-03 07:13:40.579511 | orchestrator | ok: [testbed-node-3] 2026-02-03 07:13:40.579517 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:13:40.579523 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:13:40.579529 | orchestrator | 2026-02-03 07:13:40.579536 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-03 07:13:40.579543 | orchestrator | Tuesday 03 February 2026 07:12:45 +0000 (0:00:02.335) 1:17:58.831 ****** 2026-02-03 07:13:40.579550 | orchestrator | skipping: [testbed-node-0] 2026-02-03 07:13:40.579556 | orchestrator | skipping: [testbed-node-1] 2026-02-03 
07:13:40.579562 | orchestrator | skipping: [testbed-node-2] 2026-02-03 07:13:40.579569 | orchestrator | ok: [testbed-node-3] 2026-02-03 07:13:40.579637 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:13:40.579645 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:13:40.579651 | orchestrator | 2026-02-03 07:13:40.579658 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-03 07:13:40.579664 | orchestrator | Tuesday 03 February 2026 07:12:47 +0000 (0:00:01.886) 1:18:00.718 ****** 2026-02-03 07:13:40.579671 | orchestrator | skipping: [testbed-node-0] 2026-02-03 07:13:40.579678 | orchestrator | skipping: [testbed-node-1] 2026-02-03 07:13:40.579685 | orchestrator | skipping: [testbed-node-2] 2026-02-03 07:13:40.579692 | orchestrator | skipping: [testbed-node-3] 2026-02-03 07:13:40.579699 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:13:40.579707 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:13:40.579714 | orchestrator | 2026-02-03 07:13:40.579722 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-03 07:13:40.579730 | orchestrator | Tuesday 03 February 2026 07:12:49 +0000 (0:00:01.904) 1:18:02.622 ****** 2026-02-03 07:13:40.579737 | orchestrator | skipping: [testbed-node-0] 2026-02-03 07:13:40.579745 | orchestrator | skipping: [testbed-node-1] 2026-02-03 07:13:40.579752 | orchestrator | skipping: [testbed-node-2] 2026-02-03 07:13:40.579759 | orchestrator | skipping: [testbed-node-3] 2026-02-03 07:13:40.579766 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:13:40.579773 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:13:40.579781 | orchestrator | 2026-02-03 07:13:40.579788 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-03 07:13:40.579796 | orchestrator | Tuesday 03 February 2026 07:12:51 +0000 (0:00:02.103) 1:18:04.726 ****** 2026-02-03 07:13:40.579804 | 
orchestrator | ok: [testbed-node-0] 2026-02-03 07:13:40.579811 | orchestrator | ok: [testbed-node-1] 2026-02-03 07:13:40.579819 | orchestrator | ok: [testbed-node-2] 2026-02-03 07:13:40.579827 | orchestrator | skipping: [testbed-node-3] 2026-02-03 07:13:40.579835 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:13:40.579842 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:13:40.579849 | orchestrator | 2026-02-03 07:13:40.579857 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-03 07:13:40.579864 | orchestrator | Tuesday 03 February 2026 07:12:53 +0000 (0:00:01.809) 1:18:06.535 ****** 2026-02-03 07:13:40.579873 | orchestrator | ok: [testbed-node-0] 2026-02-03 07:13:40.579881 | orchestrator | ok: [testbed-node-1] 2026-02-03 07:13:40.579889 | orchestrator | ok: [testbed-node-2] 2026-02-03 07:13:40.579898 | orchestrator | ok: [testbed-node-3] 2026-02-03 07:13:40.579907 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:13:40.579915 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:13:40.579922 | orchestrator | 2026-02-03 07:13:40.579930 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-03 07:13:40.579937 | orchestrator | Tuesday 03 February 2026 07:12:55 +0000 (0:00:02.375) 1:18:08.911 ****** 2026-02-03 07:13:40.579944 | orchestrator | ok: [testbed-node-0] 2026-02-03 07:13:40.579951 | orchestrator | ok: [testbed-node-1] 2026-02-03 07:13:40.579958 | orchestrator | ok: [testbed-node-2] 2026-02-03 07:13:40.579964 | orchestrator | ok: [testbed-node-3] 2026-02-03 07:13:40.579971 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:13:40.579978 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:13:40.579984 | orchestrator | 2026-02-03 07:13:40.579990 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-02-03 07:13:40.579998 | orchestrator | Tuesday 03 February 2026 07:12:58 +0000 (0:00:02.600) 
1:18:11.511 ****** 2026-02-03 07:13:40.580006 | orchestrator | ok: [testbed-node-0] 2026-02-03 07:13:40.580014 | orchestrator | 2026-02-03 07:13:40.580021 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-02-03 07:13:40.580028 | orchestrator | Tuesday 03 February 2026 07:13:01 +0000 (0:00:03.346) 1:18:14.857 ****** 2026-02-03 07:13:40.580035 | orchestrator | ok: [testbed-node-0] 2026-02-03 07:13:40.580066 | orchestrator | 2026-02-03 07:13:40.580098 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-02-03 07:13:40.580129 | orchestrator | Tuesday 03 February 2026 07:13:04 +0000 (0:00:03.297) 1:18:18.155 ****** 2026-02-03 07:13:40.580170 | orchestrator | ok: [testbed-node-0] 2026-02-03 07:13:40.580201 | orchestrator | ok: [testbed-node-1] 2026-02-03 07:13:40.580218 | orchestrator | ok: [testbed-node-2] 2026-02-03 07:13:40.580224 | orchestrator | ok: [testbed-node-3] 2026-02-03 07:13:40.580231 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:13:40.580237 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:13:40.580243 | orchestrator | 2026-02-03 07:13:40.580263 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-02-03 07:13:40.580270 | orchestrator | Tuesday 03 February 2026 07:13:07 +0000 (0:00:02.751) 1:18:20.907 ****** 2026-02-03 07:13:40.580277 | orchestrator | ok: [testbed-node-0] 2026-02-03 07:13:40.580283 | orchestrator | ok: [testbed-node-1] 2026-02-03 07:13:40.580289 | orchestrator | ok: [testbed-node-2] 2026-02-03 07:13:40.580296 | orchestrator | ok: [testbed-node-3] 2026-02-03 07:13:40.580303 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:13:40.580309 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:13:40.580316 | orchestrator | 2026-02-03 07:13:40.580323 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-02-03 07:13:40.580349 | orchestrator | 
Tuesday 03 February 2026 07:13:10 +0000 (0:00:02.579) 1:18:23.487 ****** 2026-02-03 07:13:40.580359 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-03 07:13:40.580367 | orchestrator | 2026-02-03 07:13:40.580374 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-02-03 07:13:40.580381 | orchestrator | Tuesday 03 February 2026 07:13:13 +0000 (0:00:02.753) 1:18:26.240 ****** 2026-02-03 07:13:40.580387 | orchestrator | ok: [testbed-node-0] 2026-02-03 07:13:40.580393 | orchestrator | ok: [testbed-node-1] 2026-02-03 07:13:40.580400 | orchestrator | ok: [testbed-node-2] 2026-02-03 07:13:40.580407 | orchestrator | ok: [testbed-node-3] 2026-02-03 07:13:40.580414 | orchestrator | ok: [testbed-node-4] 2026-02-03 07:13:40.580421 | orchestrator | ok: [testbed-node-5] 2026-02-03 07:13:40.580424 | orchestrator | 2026-02-03 07:13:40.580428 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-02-03 07:13:40.580432 | orchestrator | Tuesday 03 February 2026 07:13:15 +0000 (0:00:02.829) 1:18:29.070 ****** 2026-02-03 07:13:40.580436 | orchestrator | changed: [testbed-node-0] 2026-02-03 07:13:40.580440 | orchestrator | changed: [testbed-node-3] 2026-02-03 07:13:40.580443 | orchestrator | changed: [testbed-node-1] 2026-02-03 07:13:40.580447 | orchestrator | changed: [testbed-node-4] 2026-02-03 07:13:40.580451 | orchestrator | changed: [testbed-node-2] 2026-02-03 07:13:40.580455 | orchestrator | changed: [testbed-node-5] 2026-02-03 07:13:40.580459 | orchestrator | 2026-02-03 07:13:40.580463 | orchestrator | PLAY [Complete upgrade] ******************************************************** 2026-02-03 07:13:40.580467 | orchestrator | 2026-02-03 07:13:40.580470 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 
2026-02-03 07:13:40.580474 | orchestrator | Tuesday 03 February 2026 07:13:20 +0000 (0:00:05.032) 1:18:34.103 ****** 2026-02-03 07:13:40.580478 | orchestrator | ok: [testbed-node-0] 2026-02-03 07:13:40.580482 | orchestrator | ok: [testbed-node-1] 2026-02-03 07:13:40.580485 | orchestrator | ok: [testbed-node-2] 2026-02-03 07:13:40.580489 | orchestrator | 2026-02-03 07:13:40.580493 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-03 07:13:40.580496 | orchestrator | Tuesday 03 February 2026 07:13:23 +0000 (0:00:02.218) 1:18:36.321 ****** 2026-02-03 07:13:40.580500 | orchestrator | ok: [testbed-node-0] 2026-02-03 07:13:40.580504 | orchestrator | ok: [testbed-node-1] 2026-02-03 07:13:40.580508 | orchestrator | ok: [testbed-node-2] 2026-02-03 07:13:40.580511 | orchestrator | 2026-02-03 07:13:40.580515 | orchestrator | TASK [Container | disallow pre-reef OSDs and enable all new reef-only functionality] *** 2026-02-03 07:13:40.580520 | orchestrator | Tuesday 03 February 2026 07:13:24 +0000 (0:00:01.468) 1:18:37.790 ****** 2026-02-03 07:13:40.580523 | orchestrator | ok: [testbed-node-0] 2026-02-03 07:13:40.580533 | orchestrator | 2026-02-03 07:13:40.580537 | orchestrator | TASK [Non container | disallow pre-reef OSDs and enable all new reef-only functionality] *** 2026-02-03 07:13:40.580541 | orchestrator | Tuesday 03 February 2026 07:13:26 +0000 (0:00:02.391) 1:18:40.182 ****** 2026-02-03 07:13:40.580545 | orchestrator | skipping: [testbed-node-0] 2026-02-03 07:13:40.580549 | orchestrator | 2026-02-03 07:13:40.580553 | orchestrator | PLAY [Upgrade node-exporter] *************************************************** 2026-02-03 07:13:40.580556 | orchestrator | 2026-02-03 07:13:40.580560 | orchestrator | TASK [Stop node-exporter] ****************************************************** 2026-02-03 07:13:40.580564 | orchestrator | Tuesday 03 February 2026 07:13:29 +0000 (0:00:02.448) 1:18:42.630 ****** 2026-02-03 
07:13:40.580568 | orchestrator | skipping: [testbed-node-0] 2026-02-03 07:13:40.580571 | orchestrator | skipping: [testbed-node-1] 2026-02-03 07:13:40.580575 | orchestrator | skipping: [testbed-node-2] 2026-02-03 07:13:40.580611 | orchestrator | skipping: [testbed-node-3] 2026-02-03 07:13:40.580616 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:13:40.580620 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:13:40.580623 | orchestrator | skipping: [testbed-manager] 2026-02-03 07:13:40.580627 | orchestrator | 2026-02-03 07:13:40.580631 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-03 07:13:40.580635 | orchestrator | Tuesday 03 February 2026 07:13:31 +0000 (0:00:02.107) 1:18:44.738 ****** 2026-02-03 07:13:40.580638 | orchestrator | skipping: [testbed-node-0] 2026-02-03 07:13:40.580642 | orchestrator | skipping: [testbed-node-1] 2026-02-03 07:13:40.580646 | orchestrator | skipping: [testbed-node-2] 2026-02-03 07:13:40.580649 | orchestrator | skipping: [testbed-node-3] 2026-02-03 07:13:40.580653 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:13:40.580657 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:13:40.580660 | orchestrator | skipping: [testbed-manager] 2026-02-03 07:13:40.580664 | orchestrator | 2026-02-03 07:13:40.580668 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-02-03 07:13:40.580672 | orchestrator | Tuesday 03 February 2026 07:13:34 +0000 (0:00:02.631) 1:18:47.369 ****** 2026-02-03 07:13:40.580675 | orchestrator | skipping: [testbed-node-0] 2026-02-03 07:13:40.580679 | orchestrator | skipping: [testbed-node-1] 2026-02-03 07:13:40.580683 | orchestrator | skipping: [testbed-node-2] 2026-02-03 07:13:40.580686 | orchestrator | skipping: [testbed-node-3] 2026-02-03 07:13:40.580690 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:13:40.580694 | orchestrator | skipping: [testbed-node-5] 2026-02-03 
07:13:40.580697 | orchestrator | skipping: [testbed-manager] 2026-02-03 07:13:40.580701 | orchestrator | 2026-02-03 07:13:40.580705 | orchestrator | TASK [ceph-container-common : Container registry authentication] *************** 2026-02-03 07:13:40.580709 | orchestrator | Tuesday 03 February 2026 07:13:37 +0000 (0:00:03.089) 1:18:50.459 ****** 2026-02-03 07:13:40.580712 | orchestrator | skipping: [testbed-node-0] 2026-02-03 07:13:40.580716 | orchestrator | skipping: [testbed-node-1] 2026-02-03 07:13:40.580720 | orchestrator | skipping: [testbed-node-2] 2026-02-03 07:13:40.580723 | orchestrator | skipping: [testbed-node-3] 2026-02-03 07:13:40.580732 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:13:40.580736 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:13:40.580739 | orchestrator | skipping: [testbed-manager] 2026-02-03 07:13:40.580743 | orchestrator | 2026-02-03 07:13:40.580747 | orchestrator | TASK [ceph-node-exporter : Include setup_container.yml] ************************ 2026-02-03 07:13:40.580750 | orchestrator | Tuesday 03 February 2026 07:13:39 +0000 (0:00:02.625) 1:18:53.084 ****** 2026-02-03 07:13:40.580754 | orchestrator | skipping: [testbed-node-0] 2026-02-03 07:13:40.580758 | orchestrator | skipping: [testbed-node-1] 2026-02-03 07:13:40.580762 | orchestrator | skipping: [testbed-node-2] 2026-02-03 07:13:40.580770 | orchestrator | skipping: [testbed-node-3] 2026-02-03 07:14:32.999677 | orchestrator | skipping: [testbed-node-4] 2026-02-03 07:14:32.999798 | orchestrator | skipping: [testbed-node-5] 2026-02-03 07:14:32.999814 | orchestrator | skipping: [testbed-manager] 2026-02-03 07:14:32.999826 | orchestrator | 2026-02-03 07:14:32.999861 | orchestrator | PLAY [Upgrade monitoring node] ************************************************* 2026-02-03 07:14:32.999874 | orchestrator | 2026-02-03 07:14:32.999885 | orchestrator | TASK [Stop monitoring services] ************************************************ 2026-02-03 07:14:32.999896 | 
orchestrator | Tuesday 03 February 2026 07:13:43 +0000 (0:00:03.232) 1:18:56.316 ****** 2026-02-03 07:14:32.999908 | orchestrator | skipping: [testbed-manager] => (item=alertmanager)  2026-02-03 07:14:32.999945 | orchestrator | skipping: [testbed-manager] => (item=prometheus)  2026-02-03 07:14:32.999957 | orchestrator | skipping: [testbed-manager] => (item=grafana-server)  2026-02-03 07:14:32.999968 | orchestrator | skipping: [testbed-manager] 2026-02-03 07:14:32.999979 | orchestrator | 2026-02-03 07:14:32.999990 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************ 2026-02-03 07:14:33.000001 | orchestrator | Tuesday 03 February 2026 07:13:44 +0000 (0:00:01.159) 1:18:57.476 ****** 2026-02-03 07:14:33.000013 | orchestrator | skipping: [testbed-manager] 2026-02-03 07:14:33.000036 | orchestrator | 2026-02-03 07:14:33.000047 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************ 2026-02-03 07:14:33.000058 | orchestrator | Tuesday 03 February 2026 07:13:45 +0000 (0:00:01.222) 1:18:58.698 ****** 2026-02-03 07:14:33.000069 | orchestrator | skipping: [testbed-manager] 2026-02-03 07:14:33.000080 | orchestrator | 2026-02-03 07:14:33.000090 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] *********************** 2026-02-03 07:14:33.000101 | orchestrator | Tuesday 03 February 2026 07:13:46 +0000 (0:00:01.209) 1:18:59.908 ****** 2026-02-03 07:14:33.000112 | orchestrator | skipping: [testbed-manager] 2026-02-03 07:14:33.000123 | orchestrator | 2026-02-03 07:14:33.000134 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] *********************** 2026-02-03 07:14:33.000145 | orchestrator | Tuesday 03 February 2026 07:13:47 +0000 (0:00:01.170) 1:19:01.078 ****** 2026-02-03 07:14:33.000158 | orchestrator | skipping: [testbed-manager] 2026-02-03 07:14:33.000171 | orchestrator | 2026-02-03 07:14:33.000184 | orchestrator | TASK [ceph-prometheus : Create 
prometheus directories] ************************* 2026-02-03 07:14:33.000209 | orchestrator | Tuesday 03 February 2026 07:13:49 +0000 (0:00:01.396) 1:19:02.475 ****** 2026-02-03 07:14:33.000223 | orchestrator | skipping: [testbed-manager] => (item=/etc/prometheus)  2026-02-03 07:14:33.000236 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/prometheus)  2026-02-03 07:14:33.000249 | orchestrator | skipping: [testbed-manager] 2026-02-03 07:14:33.000263 | orchestrator | 2026-02-03 07:14:33.000276 | orchestrator | TASK [ceph-prometheus : Write prometheus config file] ************************** 2026-02-03 07:14:33.000290 | orchestrator | Tuesday 03 February 2026 07:13:50 +0000 (0:00:01.207) 1:19:03.683 ****** 2026-02-03 07:14:33.000303 | orchestrator | skipping: [testbed-manager] 2026-02-03 07:14:33.000316 | orchestrator | 2026-02-03 07:14:33.000329 | orchestrator | TASK [ceph-prometheus : Make sure the alerting rules directory exists] ********* 2026-02-03 07:14:33.000342 | orchestrator | Tuesday 03 February 2026 07:13:51 +0000 (0:00:01.202) 1:19:04.885 ****** 2026-02-03 07:14:33.000355 | orchestrator | skipping: [testbed-manager] 2026-02-03 07:14:33.000368 | orchestrator | 2026-02-03 07:14:33.000382 | orchestrator | TASK [ceph-prometheus : Copy alerting rules] *********************************** 2026-02-03 07:14:33.000394 | orchestrator | Tuesday 03 February 2026 07:13:52 +0000 (0:00:01.220) 1:19:06.106 ****** 2026-02-03 07:14:33.000405 | orchestrator | skipping: [testbed-manager] 2026-02-03 07:14:33.000416 | orchestrator | 2026-02-03 07:14:33.000427 | orchestrator | TASK [ceph-prometheus : Create alertmanager directories] *********************** 2026-02-03 07:14:33.000438 | orchestrator | Tuesday 03 February 2026 07:13:54 +0000 (0:00:01.181) 1:19:07.287 ****** 2026-02-03 07:14:33.000449 | orchestrator | skipping: [testbed-manager] => (item=/etc/alertmanager)  2026-02-03 07:14:33.000459 | orchestrator | skipping: [testbed-manager] => 
(item=/var/lib/alertmanager)  2026-02-03 07:14:33.000470 | orchestrator | skipping: [testbed-manager] 2026-02-03 07:14:33.000481 | orchestrator | 2026-02-03 07:14:33.000492 | orchestrator | TASK [ceph-prometheus : Write alertmanager config file] ************************ 2026-02-03 07:14:33.000511 | orchestrator | Tuesday 03 February 2026 07:13:55 +0000 (0:00:01.248) 1:19:08.536 ****** 2026-02-03 07:14:33.000522 | orchestrator | skipping: [testbed-manager] 2026-02-03 07:14:33.000532 | orchestrator | 2026-02-03 07:14:33.000543 | orchestrator | TASK [ceph-prometheus : Include setup_container.yml] *************************** 2026-02-03 07:14:33.000554 | orchestrator | Tuesday 03 February 2026 07:13:56 +0000 (0:00:01.214) 1:19:09.750 ****** 2026-02-03 07:14:33.000565 | orchestrator | skipping: [testbed-manager] 2026-02-03 07:14:33.000606 | orchestrator | 2026-02-03 07:14:33.000617 | orchestrator | TASK [ceph-grafana : Include setup_container.yml] ****************************** 2026-02-03 07:14:33.000628 | orchestrator | Tuesday 03 February 2026 07:13:57 +0000 (0:00:01.139) 1:19:10.889 ****** 2026-02-03 07:14:33.000639 | orchestrator | skipping: [testbed-manager] 2026-02-03 07:14:33.000650 | orchestrator | 2026-02-03 07:14:33.000661 | orchestrator | TASK [ceph-grafana : Include configure_grafana.yml] **************************** 2026-02-03 07:14:33.000686 | orchestrator | Tuesday 03 February 2026 07:13:58 +0000 (0:00:01.183) 1:19:12.073 ****** 2026-02-03 07:14:33.000697 | orchestrator | skipping: [testbed-manager] 2026-02-03 07:14:33.000708 | orchestrator | 2026-02-03 07:14:33.000719 | orchestrator | PLAY [Upgrade ceph dashboard] ************************************************** 2026-02-03 07:14:33.000730 | orchestrator | 2026-02-03 07:14:33.000741 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-03 07:14:33.000766 | orchestrator | Tuesday 03 February 2026 07:14:01 +0000 (0:00:02.138) 1:19:14.211 ****** 2026-02-03 
07:14:33.000777 | orchestrator | skipping: [testbed-node-0] 2026-02-03 07:14:33.000788 | orchestrator | skipping: [testbed-node-1] 2026-02-03 07:14:33.000799 | orchestrator | skipping: [testbed-node-2] 2026-02-03 07:14:33.000810 | orchestrator | 2026-02-03 07:14:33.000820 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************ 2026-02-03 07:14:33.000831 | orchestrator | Tuesday 03 February 2026 07:14:02 +0000 (0:00:01.654) 1:19:15.865 ****** 2026-02-03 07:14:33.000842 | orchestrator | skipping: [testbed-node-0] 2026-02-03 07:14:33.000853 | orchestrator | skipping: [testbed-node-1] 2026-02-03 07:14:33.000883 | orchestrator | skipping: [testbed-node-2] 2026-02-03 07:14:33.000894 | orchestrator | 2026-02-03 07:14:33.000905 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************ 2026-02-03 07:14:33.000916 | orchestrator | Tuesday 03 February 2026 07:14:04 +0000 (0:00:01.783) 1:19:17.649 ****** 2026-02-03 07:14:33.000927 | orchestrator | skipping: [testbed-node-0] 2026-02-03 07:14:33.000937 | orchestrator | skipping: [testbed-node-1] 2026-02-03 07:14:33.000948 | orchestrator | skipping: [testbed-node-2] 2026-02-03 07:14:33.000959 | orchestrator | 2026-02-03 07:14:33.000969 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] *********************** 2026-02-03 07:14:33.000980 | orchestrator | Tuesday 03 February 2026 07:14:06 +0000 (0:00:01.689) 1:19:19.338 ****** 2026-02-03 07:14:33.000991 | orchestrator | skipping: [testbed-node-0] 2026-02-03 07:14:33.001002 | orchestrator | skipping: [testbed-node-1] 2026-02-03 07:14:33.001013 | orchestrator | skipping: [testbed-node-2] 2026-02-03 07:14:33.001024 | orchestrator | 2026-02-03 07:14:33.001035 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] *********************** 2026-02-03 07:14:33.001046 | orchestrator | Tuesday 03 February 2026 07:14:07 +0000 (0:00:01.464) 1:19:20.803 ****** 2026-02-03 
07:14:33.001057 | orchestrator | skipping: [testbed-node-0] 2026-02-03 07:14:33.001068 | orchestrator | skipping: [testbed-node-1] 2026-02-03 07:14:33.001079 | orchestrator | skipping: [testbed-node-2] 2026-02-03 07:14:33.001089 | orchestrator | 2026-02-03 07:14:33.001100 | orchestrator | TASK [ceph-dashboard : Include configure_dashboard.yml] ************************ 2026-02-03 07:14:33.001111 | orchestrator | Tuesday 03 February 2026 07:14:09 +0000 (0:00:01.645) 1:19:22.449 ****** 2026-02-03 07:14:33.001122 | orchestrator | skipping: [testbed-node-0] 2026-02-03 07:14:33.001133 | orchestrator | skipping: [testbed-node-1] 2026-02-03 07:14:33.001169 | orchestrator | skipping: [testbed-node-2] 2026-02-03 07:14:33.001181 | orchestrator | 2026-02-03 07:14:33.001192 | orchestrator | TASK [ceph-dashboard : Print dashboard URL] ************************************ 2026-02-03 07:14:33.001210 | orchestrator | Tuesday 03 February 2026 07:14:11 +0000 (0:00:01.875) 1:19:24.325 ****** 2026-02-03 07:14:33.001221 | orchestrator | skipping: [testbed-node-0] 2026-02-03 07:14:33.001232 | orchestrator | 2026-02-03 07:14:33.001243 | orchestrator | PLAY [Switch any existing crush buckets to straw2] ***************************** 2026-02-03 07:14:33.001254 | orchestrator | 2026-02-03 07:14:33.001265 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-03 07:14:33.001276 | orchestrator | Tuesday 03 February 2026 07:14:12 +0000 (0:00:01.650) 1:19:25.976 ****** 2026-02-03 07:14:33.001287 | orchestrator | ok: [testbed-node-0] 2026-02-03 07:14:33.001298 | orchestrator | 2026-02-03 07:14:33.001308 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-03 07:14:33.001319 | orchestrator | Tuesday 03 February 2026 07:14:14 +0000 (0:00:01.523) 1:19:27.499 ****** 2026-02-03 07:14:33.001330 | orchestrator | ok: [testbed-node-0] 2026-02-03 07:14:33.001341 | orchestrator | 2026-02-03 07:14:33.001352 
| orchestrator | TASK [Set_fact ceph_cmd] ******************************************************* 2026-02-03 07:14:33.001362 | orchestrator | Tuesday 03 February 2026 07:14:15 +0000 (0:00:01.267) 1:19:28.766 ****** 2026-02-03 07:14:33.001373 | orchestrator | ok: [testbed-node-0] 2026-02-03 07:14:33.001384 | orchestrator | 2026-02-03 07:14:33.001395 | orchestrator | TASK [Backup the crushmap] ***************************************************** 2026-02-03 07:14:33.001406 | orchestrator | Tuesday 03 February 2026 07:14:16 +0000 (0:00:01.205) 1:19:29.972 ****** 2026-02-03 07:14:33.001416 | orchestrator | ok: [testbed-node-0] 2026-02-03 07:14:33.001427 | orchestrator | 2026-02-03 07:14:33.001438 | orchestrator | TASK [Switch crush buckets to straw2] ****************************************** 2026-02-03 07:14:33.001449 | orchestrator | Tuesday 03 February 2026 07:14:19 +0000 (0:00:03.031) 1:19:33.004 ****** 2026-02-03 07:14:33.001460 | orchestrator | ok: [testbed-node-0] 2026-02-03 07:14:33.001470 | orchestrator | 2026-02-03 07:14:33.001481 | orchestrator | TASK [Remove crushmap backup] ************************************************** 2026-02-03 07:14:33.001492 | orchestrator | Tuesday 03 February 2026 07:14:23 +0000 (0:00:03.634) 1:19:36.638 ****** 2026-02-03 07:14:33.001503 | orchestrator | changed: [testbed-node-0] 2026-02-03 07:14:33.001514 | orchestrator | 2026-02-03 07:14:33.001525 | orchestrator | PLAY [Show ceph status] ******************************************************** 2026-02-03 07:14:33.001535 | orchestrator | 2026-02-03 07:14:33.001546 | orchestrator | TASK [Set_fact container_exec_cmd_status] ************************************** 2026-02-03 07:14:33.001557 | orchestrator | Tuesday 03 February 2026 07:14:25 +0000 (0:00:01.948) 1:19:38.586 ****** 2026-02-03 07:14:33.001587 | orchestrator | ok: [testbed-node-0] 2026-02-03 07:14:33.001600 | orchestrator | ok: [testbed-node-1] 2026-02-03 07:14:33.001611 | orchestrator | ok: [testbed-node-2] 2026-02-03 
07:14:33.001622 | orchestrator | 2026-02-03 07:14:33.001633 | orchestrator | TASK [Show ceph status] ******************************************************** 2026-02-03 07:14:33.001644 | orchestrator | Tuesday 03 February 2026 07:14:27 +0000 (0:00:01.617) 1:19:40.203 ****** 2026-02-03 07:14:33.001655 | orchestrator | ok: [testbed-node-0] 2026-02-03 07:14:33.001666 | orchestrator | 2026-02-03 07:14:33.001677 | orchestrator | TASK [Show all daemons version] ************************************************ 2026-02-03 07:14:33.001688 | orchestrator | Tuesday 03 February 2026 07:14:29 +0000 (0:00:02.283) 1:19:42.487 ****** 2026-02-03 07:14:33.001698 | orchestrator | ok: [testbed-node-0] 2026-02-03 07:14:33.001709 | orchestrator | 2026-02-03 07:14:33.001720 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 07:14:33.001732 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-03 07:14:33.001750 | orchestrator | testbed-manager : ok=25  changed=1  unreachable=0 failed=0 skipped=76  rescued=0 ignored=0 2026-02-03 07:14:33.001763 | orchestrator | testbed-node-0 : ok=248  changed=19  unreachable=0 failed=0 skipped=369  rescued=0 ignored=0 2026-02-03 07:14:33.001780 | orchestrator | testbed-node-1 : ok=191  changed=14  unreachable=0 failed=0 skipped=343  rescued=0 ignored=0 2026-02-03 07:14:33.001799 | orchestrator | testbed-node-2 : ok=196  changed=14  unreachable=0 failed=0 skipped=344  rescued=0 ignored=0 2026-02-03 07:14:33.910294 | orchestrator | testbed-node-3 : ok=317  changed=20  unreachable=0 failed=0 skipped=355  rescued=0 ignored=0 2026-02-03 07:14:33.910394 | orchestrator | testbed-node-4 : ok=307  changed=17  unreachable=0 failed=0 skipped=352  rescued=0 ignored=0 2026-02-03 07:14:33.910411 | orchestrator | testbed-node-5 : ok=303  changed=17  unreachable=0 failed=0 skipped=337  rescued=0 ignored=0 2026-02-03 07:14:33.910424 | orchestrator | 2026-02-03 
07:14:33.910437 | orchestrator | 2026-02-03 07:14:33.910448 | orchestrator | 2026-02-03 07:14:33.910460 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 07:14:33.910473 | orchestrator | Tuesday 03 February 2026 07:14:32 +0000 (0:00:03.667) 1:19:46.155 ****** 2026-02-03 07:14:33.910484 | orchestrator | =============================================================================== 2026-02-03 07:14:33.910495 | orchestrator | Re-enable pg autoscale on pools ---------------------------------------- 76.78s 2026-02-03 07:14:33.910506 | orchestrator | Disable pg autoscale on pools ------------------------------------------ 76.38s 2026-02-03 07:14:33.910517 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 33.65s 2026-02-03 07:14:33.910528 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 32.46s 2026-02-03 07:14:33.910540 | orchestrator | Gather and delegate facts ---------------------------------------------- 32.30s 2026-02-03 07:14:33.910551 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 32.18s 2026-02-03 07:14:33.910561 | orchestrator | Waiting for clean pgs... ----------------------------------------------- 31.68s 2026-02-03 07:14:33.910632 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 30.29s 2026-02-03 07:14:33.910645 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 23.76s 2026-02-03 07:14:33.910656 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 23.27s 2026-02-03 07:14:33.910667 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 23.22s 2026-02-03 07:14:33.910677 | orchestrator | Stop ceph mgr ---------------------------------------------------------- 18.39s 2026-02-03 07:14:33.910689 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 17.04s 2026-02-03 07:14:33.910700 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 15.38s 2026-02-03 07:14:33.910711 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 15.06s 2026-02-03 07:14:33.910721 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 13.60s 2026-02-03 07:14:33.910732 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 12.92s 2026-02-03 07:14:33.910743 | orchestrator | Stop ceph osd ---------------------------------------------------------- 12.24s 2026-02-03 07:14:33.910753 | orchestrator | ceph-infra : Update cache for Debian based OSs ------------------------- 11.62s 2026-02-03 07:14:33.910764 | orchestrator | Stop standby ceph mds -------------------------------------------------- 11.38s 2026-02-03 07:14:34.357210 | orchestrator | + osism apply cephclient 2026-02-03 07:14:36.635886 | orchestrator | 2026-02-03 07:14:36 | INFO  | Task e2e42d6d-d4a1-44ae-a46f-8252911422db (cephclient) was prepared for execution. 2026-02-03 07:14:36.635960 | orchestrator | 2026-02-03 07:14:36 | INFO  | It takes a moment until task e2e42d6d-d4a1-44ae-a46f-8252911422db (cephclient) has been started and output is visible here. 
2026-02-03 07:15:07.189341 | orchestrator | 2026-02-03 07:15:07.189424 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-02-03 07:15:07.189449 | orchestrator | 2026-02-03 07:15:07.189454 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-02-03 07:15:07.189459 | orchestrator | Tuesday 03 February 2026 07:14:44 +0000 (0:00:02.161) 0:00:02.161 ****** 2026-02-03 07:15:07.189464 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-02-03 07:15:07.189470 | orchestrator | 2026-02-03 07:15:07.189475 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-02-03 07:15:07.189480 | orchestrator | Tuesday 03 February 2026 07:14:46 +0000 (0:00:02.051) 0:00:04.213 ****** 2026-02-03 07:15:07.189485 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-02-03 07:15:07.189490 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/data) 2026-02-03 07:15:07.189495 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-02-03 07:15:07.189500 | orchestrator | 2026-02-03 07:15:07.189504 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-02-03 07:15:07.189519 | orchestrator | Tuesday 03 February 2026 07:14:48 +0000 (0:00:02.700) 0:00:06.913 ****** 2026-02-03 07:15:07.189525 | orchestrator | ok: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-02-03 07:15:07.189533 | orchestrator | 2026-02-03 07:15:07.189540 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-02-03 07:15:07.189547 | orchestrator | Tuesday 03 February 2026 07:14:51 +0000 (0:00:02.258) 0:00:09.172 ****** 2026-02-03 07:15:07.189554 | orchestrator | ok: 
[testbed-manager] 2026-02-03 07:15:07.189605 | orchestrator | 2026-02-03 07:15:07.189615 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-02-03 07:15:07.189622 | orchestrator | Tuesday 03 February 2026 07:14:53 +0000 (0:00:01.974) 0:00:11.146 ****** 2026-02-03 07:15:07.189629 | orchestrator | ok: [testbed-manager] 2026-02-03 07:15:07.189637 | orchestrator | 2026-02-03 07:15:07.189644 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-02-03 07:15:07.189652 | orchestrator | Tuesday 03 February 2026 07:14:54 +0000 (0:00:01.905) 0:00:13.051 ****** 2026-02-03 07:15:07.189659 | orchestrator | ok: [testbed-manager] 2026-02-03 07:15:07.189668 | orchestrator | 2026-02-03 07:15:07.189676 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-02-03 07:15:07.189683 | orchestrator | Tuesday 03 February 2026 07:14:57 +0000 (0:00:02.209) 0:00:15.261 ****** 2026-02-03 07:15:07.189690 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-02-03 07:15:07.189698 | orchestrator | ok: [testbed-manager] => (item=ceph-authtool) 2026-02-03 07:15:07.189706 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-02-03 07:15:07.189713 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-02-03 07:15:07.189720 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-02-03 07:15:07.189728 | orchestrator | 2026-02-03 07:15:07.189734 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-02-03 07:15:07.189741 | orchestrator | Tuesday 03 February 2026 07:15:02 +0000 (0:00:05.263) 0:00:20.525 ****** 2026-02-03 07:15:07.189748 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-02-03 07:15:07.189756 | orchestrator | 2026-02-03 07:15:07.189764 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-02-03 07:15:07.189772 
| orchestrator | Tuesday 03 February 2026 07:15:04 +0000 (0:00:01.554) 0:00:22.079 ****** 2026-02-03 07:15:07.189780 | orchestrator | skipping: [testbed-manager] 2026-02-03 07:15:07.189787 | orchestrator | 2026-02-03 07:15:07.189794 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-02-03 07:15:07.189801 | orchestrator | Tuesday 03 February 2026 07:15:05 +0000 (0:00:01.184) 0:00:23.263 ****** 2026-02-03 07:15:07.189806 | orchestrator | skipping: [testbed-manager] 2026-02-03 07:15:07.189810 | orchestrator | 2026-02-03 07:15:07.189816 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-03 07:15:07.189830 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-03 07:15:07.189838 | orchestrator | 2026-02-03 07:15:07.189845 | orchestrator | 2026-02-03 07:15:07.189853 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-03 07:15:07.189860 | orchestrator | Tuesday 03 February 2026 07:15:06 +0000 (0:00:01.613) 0:00:24.877 ****** 2026-02-03 07:15:07.189868 | orchestrator | =============================================================================== 2026-02-03 07:15:07.189875 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 5.26s 2026-02-03 07:15:07.189883 | orchestrator | osism.services.cephclient : Create required directories ----------------- 2.70s 2026-02-03 07:15:07.189890 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 2.26s 2026-02-03 07:15:07.189897 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------- 2.21s 2026-02-03 07:15:07.189904 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 2.05s 2026-02-03 07:15:07.189913 | orchestrator | osism.services.cephclient : Copy keyring file 
--------------------------- 1.97s 2026-02-03 07:15:07.189918 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.91s 2026-02-03 07:15:07.189923 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 1.61s 2026-02-03 07:15:07.189928 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 1.55s 2026-02-03 07:15:07.189934 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 1.18s 2026-02-03 07:15:07.601638 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-02-03 07:15:07.601760 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/300-openstack.sh 2026-02-03 07:15:07.610899 | orchestrator | + set -e 2026-02-03 07:15:07.611325 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-03 07:15:07.611359 | orchestrator | ++ export INTERACTIVE=false 2026-02-03 07:15:07.611371 | orchestrator | ++ INTERACTIVE=false 2026-02-03 07:15:07.611382 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-03 07:15:07.611394 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-03 07:15:07.611406 | orchestrator | + source /opt/manager-vars.sh 2026-02-03 07:15:07.611417 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-03 07:15:07.611428 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-03 07:15:07.611438 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-03 07:15:07.611449 | orchestrator | ++ CEPH_VERSION=reef 2026-02-03 07:15:07.611460 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-03 07:15:07.611472 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-03 07:15:07.611483 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-03 07:15:07.611497 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-03 07:15:07.611514 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-03 07:15:07.611526 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-03 07:15:07.611536 | orchestrator | ++ export ARA=false 
2026-02-03 07:15:07.611547 | orchestrator | ++ ARA=false 2026-02-03 07:15:07.611558 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-03 07:15:07.611606 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-03 07:15:07.611618 | orchestrator | ++ export TEMPEST=false 2026-02-03 07:15:07.611629 | orchestrator | ++ TEMPEST=false 2026-02-03 07:15:07.611640 | orchestrator | ++ export IS_ZUUL=true 2026-02-03 07:15:07.611651 | orchestrator | ++ IS_ZUUL=true 2026-02-03 07:15:07.611662 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-02-03 07:15:07.611673 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-02-03 07:15:07.611684 | orchestrator | ++ export EXTERNAL_API=false 2026-02-03 07:15:07.611695 | orchestrator | ++ EXTERNAL_API=false 2026-02-03 07:15:07.611705 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-03 07:15:07.611716 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-03 07:15:07.611727 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-03 07:15:07.611738 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-03 07:15:07.611749 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-03 07:15:07.611760 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-03 07:15:07.611771 | orchestrator | ++ export RABBITMQ3TO4=true 2026-02-03 07:15:07.611803 | orchestrator | ++ RABBITMQ3TO4=true 2026-02-03 07:15:07.611814 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-03 07:15:07.612376 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-02-03 07:15:07.617669 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1 2026-02-03 07:15:07.617713 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1 2026-02-03 07:15:07.617726 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-03 07:15:07.617737 | orchestrator | + osism migrate rabbitmq3to4 prepare 2026-02-03 07:15:31.821840 | orchestrator | 2026-02-03 07:15:31 | ERROR  | Unable to get 
ansible vault password 2026-02-03 07:15:31.821951 | orchestrator | 2026-02-03 07:15:31 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-02-03 07:15:31.821969 | orchestrator | 2026-02-03 07:15:31 | ERROR  | Dropping encrypted entries 2026-02-03 07:15:31.873388 | orchestrator | 2026-02-03 07:15:31 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 2026-02-03 07:15:31.875628 | orchestrator | 2026-02-03 07:15:31 | INFO  | Kolla configuration check passed 2026-02-03 07:15:32.083697 | orchestrator | 2026-02-03 07:15:32 | INFO  | Created vhost 'openstack' with default_queue_type=quorum 2026-02-03 07:15:32.099670 | orchestrator | 2026-02-03 07:15:32 | INFO  | Set permissions for user 'openstack' on vhost 'openstack' 2026-02-03 07:15:32.449003 | orchestrator | + osism migrate rabbitmq3to4 list 2026-02-03 07:15:54.848144 | orchestrator | 2026-02-03 07:15:54 | ERROR  | Unable to get ansible vault password 2026-02-03 07:15:54.848262 | orchestrator | 2026-02-03 07:15:54 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-02-03 07:15:54.848281 | orchestrator | 2026-02-03 07:15:54 | ERROR  | Dropping encrypted entries 2026-02-03 07:15:54.887097 | orchestrator | 2026-02-03 07:15:54 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 
2026-02-03 07:15:55.041452 | orchestrator | 2026-02-03 07:15:55 | INFO  | Found 208 classic queue(s) in vhost '/': 2026-02-03 07:15:55.041599 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - alarm.all.sample (vhost: /, messages: 0) 2026-02-03 07:15:55.042197 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - alarming.sample (vhost: /, messages: 0) 2026-02-03 07:15:55.042714 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - barbican.workers (vhost: /, messages: 0) 2026-02-03 07:15:55.066140 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - barbican.workers.barbican.queue (vhost: /, messages: 0) 2026-02-03 07:15:55.066376 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - barbican.workers_fanout_730e1da10d3c4e66888159d9c314c073 (vhost: /, messages: 0) 2026-02-03 07:15:55.066397 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - barbican.workers_fanout_9de8d38d354344f0a53c8996decb1430 (vhost: /, messages: 0) 2026-02-03 07:15:55.066410 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - barbican.workers_fanout_ee989a74f83e42e8b86e10afb3e157d5 (vhost: /, messages: 0) 2026-02-03 07:15:55.066430 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - barbican_notifications.info (vhost: /, messages: 0) 2026-02-03 07:15:55.066449 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - central (vhost: /, messages: 0) 2026-02-03 07:15:55.066684 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - central.testbed-node-0 (vhost: /, messages: 0) 2026-02-03 07:15:55.066701 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - central.testbed-node-1 (vhost: /, messages: 0) 2026-02-03 07:15:55.066712 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - central.testbed-node-2 (vhost: /, messages: 0) 2026-02-03 07:15:55.066724 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - central_fanout_1c0672d0e0a643fca98f2e79614ad2a9 (vhost: /, messages: 0) 2026-02-03 07:15:55.066752 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - central_fanout_7339749f8b5648b2abfdb74fc4e610b8 (vhost: /, messages: 0) 2026-02-03 
07:15:55.066809 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - central_fanout_7da8e207adf24b5a978d7143f81f4262 (vhost: /, messages: 0) 2026-02-03 07:15:55.066827 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - central_fanout_9da9eabff8544827bd2e7dad085d0cc7 (vhost: /, messages: 0) 2026-02-03 07:15:55.066845 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - central_fanout_a264311d10f944b495fe9dac5ed492b5 (vhost: /, messages: 0) 2026-02-03 07:15:55.066863 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - central_fanout_c099db7b776d4253aa3f3f0ab28c6616 (vhost: /, messages: 0) 2026-02-03 07:15:55.066900 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - cinder-backup (vhost: /, messages: 0) 2026-02-03 07:15:55.067312 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - cinder-backup.testbed-node-0 (vhost: /, messages: 0) 2026-02-03 07:15:55.067478 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - cinder-backup.testbed-node-1 (vhost: /, messages: 0) 2026-02-03 07:15:55.067629 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - cinder-backup.testbed-node-2 (vhost: /, messages: 0) 2026-02-03 07:15:55.067658 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - cinder-backup_fanout_503479a5f2f442dcbcff975cdea7668a (vhost: /, messages: 0) 2026-02-03 07:15:55.067980 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - cinder-backup_fanout_8ba12fb033ac443c876d08cb7e26c506 (vhost: /, messages: 0) 2026-02-03 07:15:55.068269 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - cinder-backup_fanout_ebce385f73d4450dbf9079ae72b09ecc (vhost: /, messages: 0) 2026-02-03 07:15:55.069147 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - cinder-scheduler (vhost: /, messages: 0) 2026-02-03 07:15:55.069257 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - cinder-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-02-03 07:15:55.073168 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - cinder-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-02-03 07:15:55.073205 | orchestrator | 2026-02-03 
07:15:55 | INFO  |  - cinder-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-02-03 07:15:55.073215 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - cinder-scheduler_fanout_228dc68dbf5f498585dd0a18576a60ec (vhost: /, messages: 0) 2026-02-03 07:15:55.073226 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - cinder-scheduler_fanout_947f7396240f4e4fa92de7f20e2f9e4e (vhost: /, messages: 0) 2026-02-03 07:15:55.073236 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - cinder-scheduler_fanout_fa8c591be8b144609cea27abe7354a60 (vhost: /, messages: 0) 2026-02-03 07:15:55.073246 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - cinder-volume (vhost: /, messages: 0) 2026-02-03 07:15:55.073257 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes (vhost: /, messages: 0) 2026-02-03 07:15:55.073267 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 (vhost: /, messages: 0) 2026-02-03 07:15:55.073277 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout_d86ac08405e1471483385777d75382b3 (vhost: /, messages: 0) 2026-02-03 07:15:55.073287 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes (vhost: /, messages: 0) 2026-02-03 07:15:55.073297 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 (vhost: /, messages: 0) 2026-02-03 07:15:55.073307 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout_66d3358bbf3c4db094f605120a248566 (vhost: /, messages: 0) 2026-02-03 07:15:55.073335 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes (vhost: /, messages: 0) 2026-02-03 07:15:55.073345 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 (vhost: /, messages: 0) 2026-02-03 07:15:55.073354 | orchestrator | 2026-02-03 07:15:55 | INFO  
|  - cinder-volume.testbed-node-2@rbd-volumes_fanout_1b6eb441ee854f39b19e87b4d1bb0541 (vhost: /, messages: 0) 2026-02-03 07:15:55.073364 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - cinder-volume_fanout_74dbc224c3d14564814547318936f25c (vhost: /, messages: 0) 2026-02-03 07:15:55.073374 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - cinder-volume_fanout_c114bd5039414c4dbb96e14ee74f87bf (vhost: /, messages: 0) 2026-02-03 07:15:55.073383 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - cinder-volume_fanout_c1b30c8443904141bbcbfbaafbd4bd23 (vhost: /, messages: 0) 2026-02-03 07:15:55.073393 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - compute (vhost: /, messages: 0) 2026-02-03 07:15:55.073403 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - compute.testbed-node-3 (vhost: /, messages: 0) 2026-02-03 07:15:55.073413 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - compute.testbed-node-4 (vhost: /, messages: 0) 2026-02-03 07:15:55.078892 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - compute.testbed-node-5 (vhost: /, messages: 0) 2026-02-03 07:15:55.078950 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - compute_fanout_0319355667a94cc2b69281c580d97bd0 (vhost: /, messages: 0) 2026-02-03 07:15:55.078988 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - compute_fanout_07ed56e30070409097019bae27367b6c (vhost: /, messages: 0) 2026-02-03 07:15:55.079010 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - compute_fanout_f551eb0530144df1ad4d5888c6ed23bc (vhost: /, messages: 0) 2026-02-03 07:15:55.079029 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - conductor (vhost: /, messages: 0) 2026-02-03 07:15:55.079048 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - conductor.testbed-node-0 (vhost: /, messages: 0) 2026-02-03 07:15:55.079066 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - conductor.testbed-node-1 (vhost: /, messages: 0) 2026-02-03 07:15:55.079086 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - conductor.testbed-node-2 (vhost: /, messages: 0) 
2026-02-03 07:15:55.079106 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - conductor_fanout_96ae0b6debd746f6bac93590d8f88beb (vhost: /, messages: 0) 2026-02-03 07:15:55.079124 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - conductor_fanout_9e186b37177545fa9f986e6c6ac0b402 (vhost: /, messages: 0) 2026-02-03 07:15:55.079142 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - conductor_fanout_a1a5c6f98f8441ffb79fd5f4b4c6f177 (vhost: /, messages: 0) 2026-02-03 07:15:55.079160 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - conductor_fanout_c11d665fddcc4ca9a0f79105af74ec6f (vhost: /, messages: 0) 2026-02-03 07:15:55.079178 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - conductor_fanout_e03d0c7a05714b029e75403a7be18b78 (vhost: /, messages: 0) 2026-02-03 07:15:55.079196 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - conductor_fanout_fa9681bec1984af7aacc3a6f924ebff0 (vhost: /, messages: 0) 2026-02-03 07:15:55.079216 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - magnum-conductor (vhost: /, messages: 0) 2026-02-03 07:15:55.079234 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - magnum-conductor.3gguono63tcd (vhost: /, messages: 0) 2026-02-03 07:15:55.079254 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - magnum-conductor.3nzfrqpkeivi (vhost: /, messages: 0) 2026-02-03 07:15:55.079272 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - magnum-conductor.psiwb3v3wrzy (vhost: /, messages: 0) 2026-02-03 07:15:55.079313 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - magnum-conductor_fanout_2e377eab54c84e84b650adb8d89c9277 (vhost: /, messages: 0) 2026-02-03 07:15:55.079334 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - magnum-conductor_fanout_3062ba2f3e3649dbbb6d23c429504078 (vhost: /, messages: 0) 2026-02-03 07:15:55.079352 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - magnum-conductor_fanout_31a4dfa256f34d58a207ae6784e61567 (vhost: /, messages: 0) 2026-02-03 07:15:55.079370 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - 
magnum-conductor_fanout_41967feb350a47fc9a5f39ec0866130a (vhost: /, messages: 0) 2026-02-03 07:15:55.079382 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - magnum-conductor_fanout_67e76c70945b41209673b2d3e7fc96f0 (vhost: /, messages: 0) 2026-02-03 07:15:55.079392 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - magnum-conductor_fanout_6927d919c9a54dd7854f0ac1ddd6d40e (vhost: /, messages: 0) 2026-02-03 07:15:55.079403 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - magnum-conductor_fanout_81c6f52f89fd47d0b145c679b0437d63 (vhost: /, messages: 0) 2026-02-03 07:15:55.079414 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - magnum-conductor_fanout_9cfcd0e9bec7440dad2aa1861b9c46fe (vhost: /, messages: 0) 2026-02-03 07:15:55.079425 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - magnum-conductor_fanout_db8021183da84ccaa3f2d09c57fb2c1d (vhost: /, messages: 0) 2026-02-03 07:15:55.079436 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - manila-data (vhost: /, messages: 0) 2026-02-03 07:15:55.079446 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - manila-data.testbed-node-0 (vhost: /, messages: 0) 2026-02-03 07:15:55.079459 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - manila-data.testbed-node-1 (vhost: /, messages: 0) 2026-02-03 07:15:55.079472 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - manila-data.testbed-node-2 (vhost: /, messages: 0) 2026-02-03 07:15:55.079486 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - manila-data_fanout_26c178a5d26947daaafd84da062f796c (vhost: /, messages: 0) 2026-02-03 07:15:55.079517 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - manila-data_fanout_463e0192edc54c15845da2b8d90ee91c (vhost: /, messages: 0) 2026-02-03 07:15:55.079530 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - manila-data_fanout_46c5c76a23f946a0886042fea1eca4e3 (vhost: /, messages: 0) 2026-02-03 07:15:55.079544 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - manila-scheduler (vhost: /, messages: 0) 2026-02-03 07:15:55.090126 | orchestrator | 2026-02-03 
07:15:55 | INFO  |  - manila-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-02-03 07:15:55.090437 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - manila-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-02-03 07:15:55.090475 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - manila-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-02-03 07:15:55.090488 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - manila-scheduler_fanout_6611331030a541d0aeb7b74a336fc4b1 (vhost: /, messages: 0) 2026-02-03 07:15:55.090500 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - manila-scheduler_fanout_7d57e72b3181450ea1a86caa9d2d96f8 (vhost: /, messages: 0) 2026-02-03 07:15:55.090512 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - manila-scheduler_fanout_ac31d3243b5c4e11b84cfe35f975e183 (vhost: /, messages: 0) 2026-02-03 07:15:55.090523 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - manila-share (vhost: /, messages: 0) 2026-02-03 07:15:55.090534 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - manila-share.testbed-node-0@cephfsnative1 (vhost: /, messages: 0) 2026-02-03 07:15:55.090545 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - manila-share.testbed-node-1@cephfsnative1 (vhost: /, messages: 0) 2026-02-03 07:15:55.090625 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - manila-share.testbed-node-2@cephfsnative1 (vhost: /, messages: 0) 2026-02-03 07:15:55.090639 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - manila-share_fanout_1e2e6f2b15094a8ab07da61b2b686791 (vhost: /, messages: 0) 2026-02-03 07:15:55.090651 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - manila-share_fanout_4c94b97406284f149021c40a3295b567 (vhost: /, messages: 0) 2026-02-03 07:15:55.090661 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - manila-share_fanout_8f1865cb56fc4698bc58363351c4df87 (vhost: /, messages: 0) 2026-02-03 07:15:55.090673 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - notifications.audit (vhost: /, messages: 0) 2026-02-03 07:15:55.090684 | orchestrator | 2026-02-03 
07:15:55 | INFO  |  - notifications.critical (vhost: /, messages: 0) 2026-02-03 07:15:55.090695 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - notifications.debug (vhost: /, messages: 0) 2026-02-03 07:15:55.090706 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - notifications.error (vhost: /, messages: 0) 2026-02-03 07:15:55.090717 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - notifications.info (vhost: /, messages: 0) 2026-02-03 07:15:55.090728 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - notifications.sample (vhost: /, messages: 0) 2026-02-03 07:15:55.090739 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - notifications.warn (vhost: /, messages: 0) 2026-02-03 07:15:55.090750 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - octavia_provisioning_v2 (vhost: /, messages: 0) 2026-02-03 07:15:55.090761 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - octavia_provisioning_v2.testbed-node-0 (vhost: /, messages: 0) 2026-02-03 07:15:55.090771 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - octavia_provisioning_v2.testbed-node-1 (vhost: /, messages: 0) 2026-02-03 07:15:55.090782 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - octavia_provisioning_v2.testbed-node-2 (vhost: /, messages: 0) 2026-02-03 07:15:55.090793 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - octavia_provisioning_v2_fanout_1d7f66781c9a4339bfaafb5a35eb0d09 (vhost: /, messages: 0) 2026-02-03 07:15:55.090805 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - octavia_provisioning_v2_fanout_310f89560135480da6a2f5f0d8e928c3 (vhost: /, messages: 0) 2026-02-03 07:15:55.090816 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - octavia_provisioning_v2_fanout_d4168bd3aadc4e4dbc13d2830511a3e8 (vhost: /, messages: 0) 2026-02-03 07:15:55.090827 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - producer (vhost: /, messages: 0) 2026-02-03 07:15:55.090837 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - producer.testbed-node-0 (vhost: /, messages: 0) 2026-02-03 07:15:55.090848 | orchestrator | 
2026-02-03 07:15:55 | INFO  |  - producer.testbed-node-1 (vhost: /, messages: 0) 2026-02-03 07:15:55.090859 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - producer.testbed-node-2 (vhost: /, messages: 0) 2026-02-03 07:15:55.090870 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - producer_fanout_15656348dc414875815c89ef2074f307 (vhost: /, messages: 0) 2026-02-03 07:15:55.090886 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - producer_fanout_18872904894641ce9ca83ed4d5186526 (vhost: /, messages: 0) 2026-02-03 07:15:55.090916 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - producer_fanout_4966d8d9874f4afaad6eeeedd4945b43 (vhost: /, messages: 0) 2026-02-03 07:15:55.090928 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - producer_fanout_781d95a5f83843e9b24eccd41b2d0a82 (vhost: /, messages: 0) 2026-02-03 07:15:55.090940 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - producer_fanout_8b9210a5f2a645ce9f2d72974ee083d2 (vhost: /, messages: 0) 2026-02-03 07:15:55.090957 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - producer_fanout_dea5c74c0f7b4d76b258eab33a506aae (vhost: /, messages: 0) 2026-02-03 07:15:55.090972 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-plugin (vhost: /, messages: 0) 2026-02-03 07:15:55.090987 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-plugin.testbed-node-0 (vhost: /, messages: 0) 2026-02-03 07:15:55.090999 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-plugin.testbed-node-1 (vhost: /, messages: 0) 2026-02-03 07:15:55.091012 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-plugin.testbed-node-2 (vhost: /, messages: 0) 2026-02-03 07:15:55.091025 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-plugin_fanout_207ff539447e484b99c8a8941d63c601 (vhost: /, messages: 0) 2026-02-03 07:15:55.091039 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-plugin_fanout_425146a6d95c43548d2ecc9107ff1045 (vhost: /, messages: 0) 2026-02-03 07:15:55.093220 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - 
q-plugin_fanout_83171b53c8274fba90fcdb5b2eb18e53 (vhost: /, messages: 0) 2026-02-03 07:15:55.093248 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-plugin_fanout_84037edbf5d843b7a702e12ccce5901d (vhost: /, messages: 0) 2026-02-03 07:15:55.093259 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-plugin_fanout_94f71e44694d40cc8f800054f529bb3f (vhost: /, messages: 0) 2026-02-03 07:15:55.093270 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-plugin_fanout_a748df75ac8c423bb03024418aa83ca2 (vhost: /, messages: 0) 2026-02-03 07:15:55.093281 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-plugin_fanout_ad385f478eb1407c8008f640a8288a79 (vhost: /, messages: 0) 2026-02-03 07:15:55.093292 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-plugin_fanout_e31485d353a24b00afbe5014d4d125a8 (vhost: /, messages: 0) 2026-02-03 07:15:55.093303 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-plugin_fanout_f2c0fc6b26964f28bc885a53aac739d2 (vhost: /, messages: 0) 2026-02-03 07:15:55.104301 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-reports-plugin (vhost: /, messages: 0) 2026-02-03 07:15:55.104352 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-reports-plugin.testbed-node-0 (vhost: /, messages: 0) 2026-02-03 07:15:55.104361 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-reports-plugin.testbed-node-1 (vhost: /, messages: 0) 2026-02-03 07:15:55.104367 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-reports-plugin.testbed-node-2 (vhost: /, messages: 0) 2026-02-03 07:15:55.104374 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-reports-plugin_fanout_06612cee0f254596805a603c9185113f (vhost: /, messages: 0) 2026-02-03 07:15:55.104382 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-reports-plugin_fanout_1211efe2a1e64f1d9f5ddbc157a2b600 (vhost: /, messages: 0) 2026-02-03 07:15:55.104388 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-reports-plugin_fanout_17c41fee65854331bfcb675732cfbf00 (vhost: /, messages: 0) 2026-02-03 07:15:55.104394 | 
orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-reports-plugin_fanout_248dd87150514776963360ec0e4d2ab5 (vhost: /, messages: 0) 2026-02-03 07:15:55.104400 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-reports-plugin_fanout_28fe8bb97a7840898834f72480a8c3f6 (vhost: /, messages: 0) 2026-02-03 07:15:55.104406 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-reports-plugin_fanout_4764f36563984e068cca41e2bf94c63b (vhost: /, messages: 0) 2026-02-03 07:15:55.104412 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-reports-plugin_fanout_5139b4f3eea64b54b97e97768e476ca8 (vhost: /, messages: 0) 2026-02-03 07:15:55.104419 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-reports-plugin_fanout_7297ab2df93c4456b1f5bd1195506b28 (vhost: /, messages: 0) 2026-02-03 07:15:55.104436 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-reports-plugin_fanout_7ad02bd5bc9d4c5db195b1627f53fd1f (vhost: /, messages: 0) 2026-02-03 07:15:55.104449 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-reports-plugin_fanout_9915e4b92a0f4e74ab332f4ff7f20505 (vhost: /, messages: 0) 2026-02-03 07:15:55.104455 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-reports-plugin_fanout_9a7b69b535554a76ae907c434d8f2748 (vhost: /, messages: 0) 2026-02-03 07:15:55.104461 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-reports-plugin_fanout_a5cef65c9f8e4b968d43d7457dec4213 (vhost: /, messages: 0) 2026-02-03 07:15:55.104467 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-reports-plugin_fanout_af1654847b564a36966e1f489b9d7ba2 (vhost: /, messages: 0) 2026-02-03 07:15:55.104474 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-reports-plugin_fanout_b823d89bdd3941938ba910c58c3043f8 (vhost: /, messages: 0) 2026-02-03 07:15:55.104480 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-reports-plugin_fanout_bced82d7b6aa4666b16f3d2535e72f24 (vhost: /, messages: 0) 2026-02-03 07:15:55.104486 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - 
q-reports-plugin_fanout_d893f80493f84167900a19f4c7df2f7f (vhost: /, messages: 0) 2026-02-03 07:15:55.104492 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-reports-plugin_fanout_f2de15b8df164d3bb7f21262a63fff9e (vhost: /, messages: 0) 2026-02-03 07:15:55.104498 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-reports-plugin_fanout_f92ceb8ae4794b2193364bcc6b543c26 (vhost: /, messages: 0) 2026-02-03 07:15:55.104505 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-server-resource-versions (vhost: /, messages: 0) 2026-02-03 07:15:55.104511 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-server-resource-versions.testbed-node-0 (vhost: /, messages: 0) 2026-02-03 07:15:55.104517 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-server-resource-versions.testbed-node-1 (vhost: /, messages: 0) 2026-02-03 07:15:55.104523 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-server-resource-versions.testbed-node-2 (vhost: /, messages: 0) 2026-02-03 07:15:55.104529 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-server-resource-versions_fanout_10121b87e9f846c3bd8e4fb82ce996c1 (vhost: /, messages: 0) 2026-02-03 07:15:55.104536 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-server-resource-versions_fanout_28a80245fe974f13af56aaf161a85203 (vhost: /, messages: 0) 2026-02-03 07:15:55.104543 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-server-resource-versions_fanout_38c5371eb021465b895049b0a62f9a87 (vhost: /, messages: 0) 2026-02-03 07:15:55.104579 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-server-resource-versions_fanout_4a7979b7872d402fa22eae9acf1be50e (vhost: /, messages: 0) 2026-02-03 07:15:55.104587 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-server-resource-versions_fanout_7809f1fb0cea450f94b0efd1547b12dc (vhost: /, messages: 0) 2026-02-03 07:15:55.104593 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-server-resource-versions_fanout_85e70a368fe4484a9249208957d6efde (vhost: /, messages: 0) 2026-02-03 07:15:55.104599 | 
orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-server-resource-versions_fanout_8c6366ba2fcc436083313cf6feb5b531 (vhost: /, messages: 0) 2026-02-03 07:15:55.104624 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-server-resource-versions_fanout_bb8325f3db6f4423b66567c360981df1 (vhost: /, messages: 0) 2026-02-03 07:15:55.104630 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - q-server-resource-versions_fanout_f2d57f4a7da046c684d0c2154fe44cfb (vhost: /, messages: 0) 2026-02-03 07:15:55.104642 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - reply_07e7ff1474244edd815c7c0597ea7b3d (vhost: /, messages: 0) 2026-02-03 07:15:55.104649 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - reply_30ab444a628a431f8a8e3d6479e76823 (vhost: /, messages: 0) 2026-02-03 07:15:55.104655 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - reply_4c83cc1fb63c4d7a8cb0afb37d685e7b (vhost: /, messages: 0) 2026-02-03 07:15:55.104662 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - reply_6ff466e9e72f4658a2dc84d3b2e0eeb0 (vhost: /, messages: 0) 2026-02-03 07:15:55.104668 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - reply_715fa3e38c2a48ae8a46b88afcedfbf9 (vhost: /, messages: 0) 2026-02-03 07:15:55.104674 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - reply_7dd6acc62a164170b81269a6eecb6d6a (vhost: /, messages: 0) 2026-02-03 07:15:55.104680 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - reply_8788b3a9195e44a393e77b0b10766457 (vhost: /, messages: 0) 2026-02-03 07:15:55.104687 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - reply_8e3cd47774674a46aabc30e946e2bc57 (vhost: /, messages: 0) 2026-02-03 07:15:55.104696 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - reply_94dc3edda0d0410db371691d99dd3027 (vhost: /, messages: 0) 2026-02-03 07:15:55.104702 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - reply_a5d8aabd1292400d9837601850577493 (vhost: /, messages: 0) 2026-02-03 07:15:55.104709 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - reply_a84e2c4f792749c9aa09c40a8173fe8a (vhost: 
/, messages: 0)
2026-02-03 07:15:55.104715 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - reply_c50dca8b2ad24058a0f04b783d9f589d (vhost: /, messages: 0)
2026-02-03 07:15:55.104721 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - reply_ce67261d6be8416c881f19a703e9a1cc (vhost: /, messages: 0)
2026-02-03 07:15:55.104729 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - reply_cf873aa7a63d462bb79f0e2afd7a3dc9 (vhost: /, messages: 0)
2026-02-03 07:15:55.104739 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - reply_d57c009be5ba427d9884ec708ae7b562 (vhost: /, messages: 0)
2026-02-03 07:15:55.104749 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - reply_d5b8023dc38f408684b449431c5e3994 (vhost: /, messages: 0)
2026-02-03 07:15:55.104759 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - reply_e7c929b708514abcbababd14c6afaaf0 (vhost: /, messages: 0)
2026-02-03 07:15:55.104768 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - reply_ea502bc229d742e7ae5b71f04eda8e14 (vhost: /, messages: 0)
2026-02-03 07:15:55.104776 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - reply_f32e64e1696e4d659bdbc1667d8fe0be (vhost: /, messages: 0)
2026-02-03 07:15:55.104784 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - reply_f83cca30554a43e4b0ecb30af58ea5e5 (vhost: /, messages: 0)
2026-02-03 07:15:55.104793 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - scheduler (vhost: /, messages: 0)
2026-02-03 07:15:55.104802 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - scheduler.testbed-node-0 (vhost: /, messages: 0)
2026-02-03 07:15:55.104811 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - scheduler.testbed-node-1 (vhost: /, messages: 0)
2026-02-03 07:15:55.104821 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - scheduler.testbed-node-2 (vhost: /, messages: 0)
2026-02-03 07:15:55.104844 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - scheduler_fanout_09de478608cb4d4f93777bcef544e051 (vhost: /, messages: 0)
2026-02-03 07:15:55.104855 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - scheduler_fanout_1ed24bf0b9994d07bac65a55bd44758c (vhost: /, messages: 0)
2026-02-03 07:15:55.104874 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - scheduler_fanout_3588b2edc6d54c18b998c64c1bc5839c (vhost: /, messages: 0)
2026-02-03 07:15:55.104886 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - scheduler_fanout_8a2d002155d24015ac456f887d3c6d87 (vhost: /, messages: 0)
2026-02-03 07:15:55.104894 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - scheduler_fanout_e688dd4e07664670acd1d53d1cccac39 (vhost: /, messages: 0)
2026-02-03 07:15:55.104902 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - scheduler_fanout_e76976c38b54401ca6085f97b5e1d01c (vhost: /, messages: 0)
2026-02-03 07:15:55.104910 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - worker (vhost: /, messages: 0)
2026-02-03 07:15:55.104917 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - worker.testbed-node-0 (vhost: /, messages: 0)
2026-02-03 07:15:55.104924 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - worker.testbed-node-1 (vhost: /, messages: 0)
2026-02-03 07:15:55.104931 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - worker.testbed-node-2 (vhost: /, messages: 0)
2026-02-03 07:15:55.104939 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - worker_fanout_361aeec036c645c8b72387c2d3ff8f9a (vhost: /, messages: 0)
2026-02-03 07:15:55.104947 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - worker_fanout_7e55c49733494c59abd8be76a3aabc56 (vhost: /, messages: 0)
2026-02-03 07:15:55.104957 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - worker_fanout_8128a0e8063f42c288564b8933a3b647 (vhost: /, messages: 0)
2026-02-03 07:15:55.104967 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - worker_fanout_87955a3a4ebb41888f2734b0c2e8db90 (vhost: /, messages: 0)
2026-02-03 07:15:55.104978 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - worker_fanout_bbe5180ffbef41a288b84f9f387dabed (vhost: /, messages: 0)
2026-02-03 07:15:55.104988 | orchestrator | 2026-02-03 07:15:55 | INFO  |  - worker_fanout_df8509b813c64b51b768735188127243 (vhost: /, messages: 0)
2026-02-03 07:15:55.524496 | orchestrator | + osism migrate rabbitmq3to4 list-exchanges
2026-02-03 07:15:57.728379 | orchestrator | usage: osism migrate rabbitmq3to4 [-h] [--server SERVER] [--dry-run]
2026-02-03 07:15:57.728459 | orchestrator | [--no-close-connections] [--quorum]
2026-02-03 07:15:57.728487 | orchestrator | [--vhost VHOST]
2026-02-03 07:15:57.728495 | orchestrator | [{list,delete,prepare,check}]
2026-02-03 07:15:57.728504 | orchestrator | [{aodh,barbican,ceilometer,cinder,designate,notifications,manager,magnum,manila,neutron,nova,octavia}]
2026-02-03 07:15:57.728513 | orchestrator | osism migrate rabbitmq3to4: error: argument command: invalid choice: 'list-exchanges' (choose from list, delete, prepare, check)
2026-02-03 07:15:58.556682 | orchestrator | ERROR
2026-02-03 07:15:58.556941 | orchestrator | {
2026-02-03 07:15:58.556985 | orchestrator | "delta": "2:13:04.126298",
2026-02-03 07:15:58.557011 | orchestrator | "end": "2026-02-03 07:15:58.121458",
2026-02-03 07:15:58.557034 | orchestrator | "msg": "non-zero return code",
2026-02-03 07:15:58.557056 | orchestrator | "rc": 2,
2026-02-03 07:15:58.557076 | orchestrator | "start": "2026-02-03 05:02:53.995160"
2026-02-03 07:15:58.557096 | orchestrator | } failure
2026-02-03 07:15:58.799096 |
2026-02-03 07:15:58.799259 | PLAY RECAP
2026-02-03 07:15:58.799324 | orchestrator | ok: 30 changed: 11 unreachable: 0 failed: 1 skipped: 6 rescued: 0 ignored: 0
2026-02-03 07:15:58.799350 |
2026-02-03 07:15:59.049517 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/upgrade-stable.yml@main]
2026-02-03 07:15:59.050642 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-03 07:15:59.821259 |
2026-02-03 07:15:59.821520 | PLAY [Post output play]
2026-02-03 07:15:59.837510 |
2026-02-03 07:15:59.837731 | LOOP [stage-output : Register sources]
2026-02-03 07:15:59.908155 |
2026-02-03 07:15:59.908472 | TASK [stage-output : Check sudo]
2026-02-03 07:16:00.775436 | orchestrator | sudo: a password is required
2026-02-03 07:16:00.947224 | orchestrator | ok: Runtime: 0:00:00.016783
2026-02-03 07:16:00.962563 |
2026-02-03 07:16:00.962723 | LOOP [stage-output : Set source and destination for files and folders]
2026-02-03 07:16:00.998828 |
2026-02-03 07:16:00.999162 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-02-03 07:16:01.068435 | orchestrator | ok
2026-02-03 07:16:01.077650 |
2026-02-03 07:16:01.077788 | LOOP [stage-output : Ensure target folders exist]
2026-02-03 07:16:01.533798 | orchestrator | ok: "docs"
2026-02-03 07:16:01.534305 |
2026-02-03 07:16:01.786709 | orchestrator | ok: "artifacts"
2026-02-03 07:16:02.047905 | orchestrator | ok: "logs"
2026-02-03 07:16:02.061530 |
2026-02-03 07:16:02.061702 | LOOP [stage-output : Copy files and folders to staging folder]
2026-02-03 07:16:02.095214 |
2026-02-03 07:16:02.095433 | TASK [stage-output : Make all log files readable]
2026-02-03 07:16:02.408763 | orchestrator | ok
2026-02-03 07:16:02.415296 |
2026-02-03 07:16:02.415407 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-02-03 07:16:02.449579 | orchestrator | skipping: Conditional result was False
2026-02-03 07:16:02.458071 |
2026-02-03 07:16:02.458190 | TASK [stage-output : Discover log files for compression]
2026-02-03 07:16:02.481816 | orchestrator | skipping: Conditional result was False
2026-02-03 07:16:02.488608 |
2026-02-03 07:16:02.488713 | LOOP [stage-output : Archive everything from logs]
2026-02-03 07:16:02.524857 |
2026-02-03 07:16:02.525041 | PLAY [Post cleanup play]
2026-02-03 07:16:02.532681 |
2026-02-03 07:16:02.532782 | TASK [Set cloud fact (Zuul deployment)]
2026-02-03 07:16:02.586573 | orchestrator | ok
2026-02-03 07:16:02.598123 |
2026-02-03 07:16:02.598240 | TASK [Set cloud fact (local deployment)]
2026-02-03 07:16:02.622179 | orchestrator | skipping: Conditional result was False
2026-02-03 07:16:02.633427 |
2026-02-03 07:16:02.633547 | TASK [Clean the cloud environment]
2026-02-03 07:16:03.305367 | orchestrator | 2026-02-03 07:16:03 - clean up servers
2026-02-03 07:16:04.139145 | orchestrator | 2026-02-03 07:16:04 - testbed-manager
2026-02-03 07:16:04.240294 | orchestrator | 2026-02-03 07:16:04 - testbed-node-1
2026-02-03 07:16:04.334664 | orchestrator | 2026-02-03 07:16:04 - testbed-node-2
2026-02-03 07:16:04.430039 | orchestrator | 2026-02-03 07:16:04 - testbed-node-3
2026-02-03 07:16:04.530921 | orchestrator | 2026-02-03 07:16:04 - testbed-node-0
2026-02-03 07:16:04.625352 | orchestrator | 2026-02-03 07:16:04 - testbed-node-5
2026-02-03 07:16:04.716541 | orchestrator | 2026-02-03 07:16:04 - testbed-node-4
2026-02-03 07:16:04.808629 | orchestrator | 2026-02-03 07:16:04 - clean up keypairs
2026-02-03 07:16:04.833756 | orchestrator | 2026-02-03 07:16:04 - testbed
2026-02-03 07:16:04.861721 | orchestrator | 2026-02-03 07:16:04 - wait for servers to be gone
2026-02-03 07:16:15.711983 | orchestrator | 2026-02-03 07:16:15 - clean up ports
2026-02-03 07:16:15.925824 | orchestrator | 2026-02-03 07:16:15 - 1dde807b-9cf2-42da-a9d9-489794422608
2026-02-03 07:16:16.385632 | orchestrator | 2026-02-03 07:16:16 - 32c59f2c-7e80-49fc-a0bf-831c0b517e76
2026-02-03 07:16:16.673108 | orchestrator | 2026-02-03 07:16:16 - 691c149d-72be-4225-91b4-8747ac971851
2026-02-03 07:16:16.875272 | orchestrator | 2026-02-03 07:16:16 - 70f312c3-a754-4a94-a2fd-54d7e3dd32e1
2026-02-03 07:16:17.086547 | orchestrator | 2026-02-03 07:16:17 - 8b11fe9b-725a-4bb2-9ee9-80b5b7874ca3
2026-02-03 07:16:17.306547 | orchestrator | 2026-02-03 07:16:17 - 9e9ab637-4635-4621-8af6-8c02c2574b5c
2026-02-03 07:16:17.545115 | orchestrator | 2026-02-03 07:16:17 - d92f8eb0-8450-4a8f-99ef-84ad3fe34707
2026-02-03 07:16:17.757677 | orchestrator | 2026-02-03 07:16:17 - clean up volumes
2026-02-03 07:16:17.884694 | orchestrator | 2026-02-03 07:16:17 - testbed-volume-manager-base
2026-02-03 07:16:17.924012 | orchestrator | 2026-02-03 07:16:17 - testbed-volume-4-node-base
2026-02-03 07:16:17.965336 | orchestrator | 2026-02-03 07:16:17 - testbed-volume-2-node-base
2026-02-03 07:16:18.009102 | orchestrator | 2026-02-03 07:16:18 - testbed-volume-3-node-base
2026-02-03 07:16:18.051987 | orchestrator | 2026-02-03 07:16:18 - testbed-volume-1-node-base
2026-02-03 07:16:18.094697 | orchestrator | 2026-02-03 07:16:18 - testbed-volume-5-node-base
2026-02-03 07:16:18.147813 | orchestrator | 2026-02-03 07:16:18 - testbed-volume-0-node-base
2026-02-03 07:16:18.192016 | orchestrator | 2026-02-03 07:16:18 - testbed-volume-1-node-4
2026-02-03 07:16:18.232189 | orchestrator | 2026-02-03 07:16:18 - testbed-volume-6-node-3
2026-02-03 07:16:18.272482 | orchestrator | 2026-02-03 07:16:18 - testbed-volume-5-node-5
2026-02-03 07:16:18.312794 | orchestrator | 2026-02-03 07:16:18 - testbed-volume-0-node-3
2026-02-03 07:16:18.353859 | orchestrator | 2026-02-03 07:16:18 - testbed-volume-4-node-4
2026-02-03 07:16:18.393371 | orchestrator | 2026-02-03 07:16:18 - testbed-volume-3-node-3
2026-02-03 07:16:18.432856 | orchestrator | 2026-02-03 07:16:18 - testbed-volume-8-node-5
2026-02-03 07:16:18.473037 | orchestrator | 2026-02-03 07:16:18 - testbed-volume-7-node-4
2026-02-03 07:16:18.512190 | orchestrator | 2026-02-03 07:16:18 - testbed-volume-2-node-5
2026-02-03 07:16:18.552046 | orchestrator | 2026-02-03 07:16:18 - disconnect routers
2026-02-03 07:16:18.660497 | orchestrator | 2026-02-03 07:16:18 - testbed
2026-02-03 07:16:19.672343 | orchestrator | 2026-02-03 07:16:19 - clean up subnets
2026-02-03 07:16:19.710983 | orchestrator | 2026-02-03 07:16:19 - subnet-testbed-management
2026-02-03 07:16:19.863981 | orchestrator | 2026-02-03 07:16:19 - clean up networks
2026-02-03 07:16:20.049228 | orchestrator | 2026-02-03 07:16:20 - net-testbed-management
2026-02-03 07:16:20.353637 | orchestrator | 2026-02-03 07:16:20 - clean up security groups
2026-02-03 07:16:20.416441 | orchestrator | 2026-02-03 07:16:20 - testbed-node
2026-02-03 07:16:20.528536 | orchestrator | 2026-02-03 07:16:20 - testbed-management
2026-02-03 07:16:20.662732 | orchestrator | 2026-02-03 07:16:20 - clean up floating ips
2026-02-03 07:16:20.699417 | orchestrator | 2026-02-03 07:16:20 - 81.163.192.115
2026-02-03 07:16:21.087073 | orchestrator | 2026-02-03 07:16:21 - clean up routers
2026-02-03 07:16:21.650295 | orchestrator | 2026-02-03 07:16:21 - testbed
2026-02-03 07:16:23.194611 | orchestrator | ok: Runtime: 0:00:20.031201
2026-02-03 07:16:23.198589 |
2026-02-03 07:16:23.198751 | PLAY RECAP
2026-02-03 07:16:23.198933 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-02-03 07:16:23.199006 |
2026-02-03 07:16:23.327569 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-03 07:16:23.328577 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-02-03 07:16:24.034244 |
2026-02-03 07:16:24.034406 | PLAY [Cleanup play]
2026-02-03 07:16:24.050332 |
2026-02-03 07:16:24.050461 | TASK [Set cloud fact (Zuul deployment)]
2026-02-03 07:16:24.103049 | orchestrator | ok
2026-02-03 07:16:24.110592 |
2026-02-03 07:16:24.110727 | TASK [Set cloud fact (local deployment)]
2026-02-03 07:16:24.145509 | orchestrator | skipping: Conditional result was False
2026-02-03 07:16:24.162920 |
2026-02-03 07:16:24.163088 | TASK [Clean the cloud environment]
2026-02-03 07:16:25.364292 | orchestrator | 2026-02-03 07:16:25 - clean up servers
2026-02-03 07:16:25.879152 | orchestrator | 2026-02-03 07:16:25 - clean up keypairs
2026-02-03 07:16:25.893929 | orchestrator | 2026-02-03 07:16:25 - wait for servers to be gone
2026-02-03 07:16:25.935469 | orchestrator | 2026-02-03 07:16:25 - clean up ports
2026-02-03 07:16:26.022322 | orchestrator | 2026-02-03 07:16:26 - clean up volumes
2026-02-03 07:16:26.085749 | orchestrator | 2026-02-03 07:16:26 - disconnect routers
2026-02-03 07:16:26.113310 | orchestrator | 2026-02-03 07:16:26 - clean up subnets
2026-02-03 07:16:26.136129 | orchestrator | 2026-02-03 07:16:26 - clean up networks
2026-02-03 07:16:26.289908 | orchestrator | 2026-02-03 07:16:26 - clean up security groups
2026-02-03 07:16:26.322623 | orchestrator | 2026-02-03 07:16:26 - clean up floating ips
2026-02-03 07:16:26.348983 | orchestrator | 2026-02-03 07:16:26 - clean up routers
2026-02-03 07:16:26.703708 | orchestrator | ok: Runtime: 0:00:01.431371
2026-02-03 07:16:26.705624 |
2026-02-03 07:16:26.705710 | PLAY RECAP
2026-02-03 07:16:26.705762 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-02-03 07:16:26.705787 |
2026-02-03 07:16:26.824704 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-02-03 07:16:26.827304 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-02-03 07:16:27.548977 |
2026-02-03 07:16:27.549138 | PLAY [Base post-fetch]
2026-02-03 07:16:27.564392 |
2026-02-03 07:16:27.564522 | TASK [fetch-output : Set log path for multiple nodes]
2026-02-03 07:16:27.620072 | orchestrator | skipping: Conditional result was False
2026-02-03 07:16:27.634444 |
2026-02-03 07:16:27.634642 | TASK [fetch-output : Set log path for single node]
2026-02-03 07:16:27.681783 | orchestrator | ok
2026-02-03 07:16:27.689315 |
2026-02-03 07:16:27.689442 | LOOP [fetch-output : Ensure local output dirs]
2026-02-03 07:16:28.176317 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/ddf3637b028d45358890c8bfcc4ea9a8/work/logs"
2026-02-03 07:16:28.432260 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/ddf3637b028d45358890c8bfcc4ea9a8/work/artifacts"
2026-02-03 07:16:28.699814 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/ddf3637b028d45358890c8bfcc4ea9a8/work/docs"
2026-02-03 07:16:28.715444 |
2026-02-03 07:16:28.715567 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-02-03 07:16:29.632692 | orchestrator | changed: .d..t...... ./
2026-02-03 07:16:29.633223 | orchestrator | changed: All items complete
2026-02-03 07:16:29.633297 |
2026-02-03 07:16:30.351854 | orchestrator | changed: .d..t...... ./
2026-02-03 07:16:31.065026 | orchestrator | changed: .d..t...... ./
2026-02-03 07:16:31.103807 |
2026-02-03 07:16:31.104054 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-02-03 07:16:31.141874 | orchestrator | skipping: Conditional result was False
2026-02-03 07:16:31.147959 | orchestrator | skipping: Conditional result was False
2026-02-03 07:16:31.175957 |
2026-02-03 07:16:31.176085 | PLAY RECAP
2026-02-03 07:16:31.176166 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-02-03 07:16:31.176208 |
2026-02-03 07:16:31.303251 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-02-03 07:16:31.307245 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-02-03 07:16:32.016969 |
2026-02-03 07:16:32.017200 | PLAY [Base post]
2026-02-03 07:16:32.031783 |
2026-02-03 07:16:32.031938 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-02-03 07:16:33.022258 | orchestrator | changed
2026-02-03 07:16:33.032467 |
2026-02-03 07:16:33.032592 | PLAY RECAP
2026-02-03 07:16:33.032669 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-02-03 07:16:33.032745 |
2026-02-03 07:16:33.147505 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-02-03 07:16:33.149887 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-02-03 07:16:33.945300 |
2026-02-03 07:16:33.945470 | PLAY [Base post-logs]
2026-02-03 07:16:33.956116 |
2026-02-03 07:16:33.956251 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-02-03 07:16:34.407809 | localhost | changed
2026-02-03 07:16:34.418364 |
2026-02-03 07:16:34.418506 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-02-03 07:16:34.453713 | localhost | ok
2026-02-03 07:16:34.456739 |
2026-02-03 07:16:34.456836 | TASK [Set zuul-log-path fact]
2026-02-03 07:16:34.471357 | localhost | ok
2026-02-03 07:16:34.479599 |
2026-02-03 07:16:34.479707 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-03 07:16:34.504129 | localhost | ok
2026-02-03 07:16:34.507173 |
2026-02-03 07:16:34.507275 | TASK [upload-logs : Create log directories]
2026-02-03 07:16:35.010633 | localhost | changed
2026-02-03 07:16:35.013587 |
2026-02-03 07:16:35.013700 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-02-03 07:16:35.504577 | localhost -> localhost | ok: Runtime: 0:00:00.004254
2026-02-03 07:16:35.513493 |
2026-02-03 07:16:35.513675 | TASK [upload-logs : Upload logs to log server]
2026-02-03 07:16:36.095482 | localhost | Output suppressed because no_log was given
2026-02-03 07:16:36.097343 |
2026-02-03 07:16:36.097444 | LOOP [upload-logs : Compress console log and json output]
2026-02-03 07:16:36.144406 | localhost | skipping: Conditional result was False
2026-02-03 07:16:36.150212 | localhost | skipping: Conditional result was False
2026-02-03 07:16:36.153461 |
2026-02-03 07:16:36.153564 | LOOP [upload-logs : Upload compressed console log and json output]
2026-02-03 07:16:36.209075 | localhost | skipping: Conditional result was False
2026-02-03 07:16:36.209634 |
2026-02-03 07:16:36.213301 | localhost | skipping: Conditional result was False
2026-02-03 07:16:36.225929 |
2026-02-03 07:16:36.226142 | LOOP [upload-logs : Upload console log and json output]